00:00:00.001 Started by upstream project "autotest-per-patch" build number 132334 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.045 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.046 The recommended git tool is: git 00:00:00.047 using credential 00000000-0000-0000-0000-000000000002 00:00:00.051 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.100 Fetching changes from the remote Git repository 00:00:00.102 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.190 Using shallow fetch with depth 1 00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.190 > git --version # timeout=10 00:00:00.278 > git --version # 'git version 2.39.2' 00:00:00.278 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.352 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.352 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.868 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.882 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.896 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.896 > git config core.sparsecheckout # timeout=10 00:00:03.908 > git read-tree -mu HEAD # timeout=10 00:00:03.924 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.944 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.945 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.025 [Pipeline] Start of Pipeline 00:00:04.037 [Pipeline] library 00:00:04.039 Loading library shm_lib@master 00:00:04.039 Library shm_lib@master is cached. Copying from home. 00:00:04.062 [Pipeline] node 00:00:04.073 Running on CYP13 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.075 [Pipeline] { 00:00:04.085 [Pipeline] catchError 00:00:04.087 [Pipeline] { 00:00:04.099 [Pipeline] wrap 00:00:04.107 [Pipeline] { 00:00:04.115 [Pipeline] stage 00:00:04.117 [Pipeline] { (Prologue) 00:00:04.331 [Pipeline] sh 00:00:04.676 + logger -p user.info -t JENKINS-CI 00:00:04.699 [Pipeline] echo 00:00:04.701 Node: CYP13 00:00:04.708 [Pipeline] sh 00:00:05.017 [Pipeline] setCustomBuildProperty 00:00:05.031 [Pipeline] echo 00:00:05.033 Cleanup processes 00:00:05.040 [Pipeline] sh 00:00:05.330 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.330 2382288 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.344 [Pipeline] sh 00:00:05.631 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.631 ++ grep -v 'sudo pgrep' 00:00:05.631 ++ awk '{print $1}' 00:00:05.631 + sudo kill -9 00:00:05.631 + true 00:00:05.646 [Pipeline] cleanWs 00:00:05.657 [WS-CLEANUP] Deleting project workspace... 00:00:05.657 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.663 [WS-CLEANUP] done 00:00:05.668 [Pipeline] setCustomBuildProperty 00:00:05.681 [Pipeline] sh 00:00:05.965 + sudo git config --global --replace-all safe.directory '*' 00:00:06.052 [Pipeline] httpRequest 00:00:06.539 [Pipeline] echo 00:00:06.541 Sorcerer 10.211.164.20 is alive 00:00:06.550 [Pipeline] retry 00:00:06.553 [Pipeline] { 00:00:06.567 [Pipeline] httpRequest 00:00:06.572 HttpMethod: GET 00:00:06.572 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.573 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.581 Response Code: HTTP/1.1 200 OK 00:00:06.581 Success: Status code 200 is in the accepted range: 200,404 00:00:06.581 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:29.341 [Pipeline] } 00:00:29.358 [Pipeline] // retry 00:00:29.366 [Pipeline] sh 00:00:29.656 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:29.672 [Pipeline] httpRequest 00:00:30.127 [Pipeline] echo 00:00:30.129 Sorcerer 10.211.164.20 is alive 00:00:30.139 [Pipeline] retry 00:00:30.141 [Pipeline] { 00:00:30.154 [Pipeline] httpRequest 00:00:30.159 HttpMethod: GET 00:00:30.159 URL: http://10.211.164.20/packages/spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz 00:00:30.160 Sending request to url: http://10.211.164.20/packages/spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz 00:00:30.166 Response Code: HTTP/1.1 200 OK 00:00:30.167 Success: Status code 200 is in the accepted range: 200,404 00:00:30.167 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz 00:05:40.440 [Pipeline] } 00:05:40.456 [Pipeline] // retry 00:05:40.463 [Pipeline] sh 00:05:40.751 + tar --no-same-owner -xf spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz 00:05:44.061 [Pipeline] sh 00:05:44.350 + git -C spdk log --oneline -n5 00:05:44.350 57b682926 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:05:44.350 3b58329b1 bdev: Use data_block_size for upper layer buffer if no_metadata is true 00:05:44.350 9b64b1304 bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:05:44.350 95f6a056e bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:05:44.350 a38267915 bdev: Locate all hot data in spdk_bdev_desc to the first cache line 00:05:44.362 [Pipeline] } 00:05:44.378 [Pipeline] // stage 00:05:44.388 [Pipeline] stage 00:05:44.390 [Pipeline] { (Prepare) 00:05:44.407 [Pipeline] writeFile 00:05:44.424 [Pipeline] sh 00:05:44.715 + logger -p user.info -t JENKINS-CI 00:05:44.729 [Pipeline] sh 00:05:45.016 + logger -p user.info -t JENKINS-CI 00:05:45.029 [Pipeline] sh 00:05:45.320 + cat autorun-spdk.conf 00:05:45.320 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:45.320 SPDK_TEST_NVMF=1 00:05:45.320 SPDK_TEST_NVME_CLI=1 00:05:45.320 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:45.320 SPDK_TEST_NVMF_NICS=e810 00:05:45.320 SPDK_TEST_VFIOUSER=1 00:05:45.320 SPDK_RUN_UBSAN=1 00:05:45.320 NET_TYPE=phy 00:05:45.328 RUN_NIGHTLY=0 00:05:45.332 [Pipeline] readFile 00:05:45.358 [Pipeline] withEnv 00:05:45.360 [Pipeline] { 00:05:45.373 [Pipeline] sh 00:05:45.662 + set -ex 00:05:45.662 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:05:45.662 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:45.662 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:45.662 ++ SPDK_TEST_NVMF=1 00:05:45.662 ++ 
SPDK_TEST_NVME_CLI=1 00:05:45.662 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:45.662 ++ SPDK_TEST_NVMF_NICS=e810 00:05:45.662 ++ SPDK_TEST_VFIOUSER=1 00:05:45.662 ++ SPDK_RUN_UBSAN=1 00:05:45.662 ++ NET_TYPE=phy 00:05:45.662 ++ RUN_NIGHTLY=0 00:05:45.662 + case $SPDK_TEST_NVMF_NICS in 00:05:45.662 + DRIVERS=ice 00:05:45.662 + [[ tcp == \r\d\m\a ]] 00:05:45.662 + [[ -n ice ]] 00:05:45.662 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:05:45.662 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:05:45.662 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:05:45.662 rmmod: ERROR: Module irdma is not currently loaded 00:05:45.662 rmmod: ERROR: Module i40iw is not currently loaded 00:05:45.662 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:05:45.662 + true 00:05:45.662 + for D in $DRIVERS 00:05:45.662 + sudo modprobe ice 00:05:45.662 + exit 0 00:05:45.672 [Pipeline] } 00:05:45.686 [Pipeline] // withEnv 00:05:45.691 [Pipeline] } 00:05:45.706 [Pipeline] // stage 00:05:45.718 [Pipeline] catchError 00:05:45.720 [Pipeline] { 00:05:45.734 [Pipeline] timeout 00:05:45.735 Timeout set to expire in 1 hr 0 min 00:05:45.736 [Pipeline] { 00:05:45.750 [Pipeline] stage 00:05:45.752 [Pipeline] { (Tests) 00:05:45.766 [Pipeline] sh 00:05:46.053 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:46.053 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:46.053 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:46.053 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:05:46.053 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:46.053 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:46.053 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:05:46.053 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:46.053 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:46.053 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:46.053 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:05:46.053 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:46.053 + source /etc/os-release 00:05:46.053 ++ NAME='Fedora Linux' 00:05:46.053 ++ VERSION='39 (Cloud Edition)' 00:05:46.053 ++ ID=fedora 00:05:46.053 ++ VERSION_ID=39 00:05:46.053 ++ VERSION_CODENAME= 00:05:46.053 ++ PLATFORM_ID=platform:f39 00:05:46.053 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:46.053 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:46.053 ++ LOGO=fedora-logo-icon 00:05:46.053 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:46.053 ++ HOME_URL=https://fedoraproject.org/ 00:05:46.053 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:46.053 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:46.053 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:46.053 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:46.053 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:46.053 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:46.053 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:46.053 ++ SUPPORT_END=2024-11-12 00:05:46.053 ++ VARIANT='Cloud Edition' 00:05:46.053 ++ VARIANT_ID=cloud 00:05:46.053 + uname -a 00:05:46.053 Linux spdk-cyp-13 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:46.053 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:49.353 Hugepages 00:05:49.353 node hugesize free / total 00:05:49.353 node0 1048576kB 0 / 0 00:05:49.353 node0 2048kB 0 / 0 00:05:49.353 node1 1048576kB 0 / 0 00:05:49.353 node1 2048kB 0 / 0 00:05:49.353 00:05:49.353 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:49.353 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:49.353 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:49.353 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:49.353 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:49.353 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:49.353 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:49.353 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:49.353 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:49.353 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:49.353 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:49.353 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:49.353 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:49.353 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:49.353 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:49.353 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:49.353 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:49.353 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:49.353 + rm -f /tmp/spdk-ld-path 00:05:49.353 + source autorun-spdk.conf 00:05:49.353 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:49.353 ++ SPDK_TEST_NVMF=1 00:05:49.353 ++ SPDK_TEST_NVME_CLI=1 00:05:49.353 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:49.353 ++ SPDK_TEST_NVMF_NICS=e810 00:05:49.353 ++ SPDK_TEST_VFIOUSER=1 00:05:49.353 ++ SPDK_RUN_UBSAN=1 00:05:49.353 ++ NET_TYPE=phy 00:05:49.353 ++ RUN_NIGHTLY=0 00:05:49.353 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:49.353 + [[ -n '' ]] 00:05:49.353 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:49.353 + for M in /var/spdk/build-*-manifest.txt 00:05:49.353 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:05:49.353 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:49.353 + for M in /var/spdk/build-*-manifest.txt 00:05:49.353 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:49.353 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:49.353 + for M in /var/spdk/build-*-manifest.txt 00:05:49.353 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:49.353 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:49.353 ++ uname 00:05:49.353 + [[ Linux == \L\i\n\u\x ]] 00:05:49.353 + sudo dmesg -T 00:05:49.353 + sudo dmesg --clear 00:05:49.353 + dmesg_pid=2384446 00:05:49.353 + [[ Fedora Linux == FreeBSD ]] 00:05:49.353 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:49.353 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:49.353 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:49.353 + [[ -x /usr/src/fio-static/fio ]] 00:05:49.353 + export FIO_BIN=/usr/src/fio-static/fio 00:05:49.353 + FIO_BIN=/usr/src/fio-static/fio 00:05:49.353 + sudo dmesg -Tw 00:05:49.353 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:49.353 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:49.353 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:49.353 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:49.353 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:49.353 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:49.353 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:49.353 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:49.353 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:49.353 06:17:09 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:05:49.353 06:17:09 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:49.353 06:17:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:49.353 06:17:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:05:49.353 06:17:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:05:49.353 06:17:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:49.353 06:17:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:05:49.353 06:17:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:05:49.353 06:17:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:05:49.353 06:17:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:05:49.353 06:17:09 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:05:49.353 06:17:09 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:05:49.353 06:17:09 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:49.615 06:17:09 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:05:49.615 06:17:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.615 06:17:09 -- scripts/common.sh@15 -- $ shopt -s extglob 00:05:49.615 06:17:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:49.615 06:17:09 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.615 06:17:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.615 06:17:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.615 06:17:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.615 06:17:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.615 06:17:09 -- paths/export.sh@5 -- $ export PATH 00:05:49.615 06:17:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.615 06:17:09 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:49.615 06:17:09 -- common/autobuild_common.sh@486 -- $ date +%s 00:05:49.615 06:17:09 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732079829.XXXXXX 00:05:49.615 06:17:09 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732079829.vXzMEK 00:05:49.615 06:17:09 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:05:49.615 06:17:09 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:05:49.615 06:17:09 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:05:49.615 06:17:09 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:05:49.615 06:17:09 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:05:49.615 06:17:09 -- common/autobuild_common.sh@502 -- $ get_config_params 00:05:49.615 06:17:09 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:05:49.615 06:17:09 -- common/autotest_common.sh@10 -- $ set +x 
00:05:49.615 06:17:09 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:05:49.615 06:17:09 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:05:49.615 06:17:09 -- pm/common@17 -- $ local monitor 00:05:49.615 06:17:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:49.615 06:17:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:49.615 06:17:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:49.615 06:17:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:49.615 06:17:09 -- pm/common@21 -- $ date +%s 00:05:49.615 06:17:09 -- pm/common@25 -- $ sleep 1 00:05:49.615 06:17:09 -- pm/common@21 -- $ date +%s 00:05:49.615 06:17:09 -- pm/common@21 -- $ date +%s 00:05:49.615 06:17:09 -- pm/common@21 -- $ date +%s 00:05:49.615 06:17:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079829 00:05:49.615 06:17:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079829 00:05:49.615 06:17:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079829 00:05:49.615 06:17:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079829 00:05:49.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079829_collect-cpu-load.pm.log 00:05:49.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079829_collect-vmstat.pm.log 00:05:49.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079829_collect-cpu-temp.pm.log 00:05:49.615 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079829_collect-bmc-pm.bmc.pm.log 00:05:50.559 06:17:10 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:05:50.559 06:17:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:50.559 06:17:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:50.559 06:17:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:50.559 06:17:10 -- spdk/autobuild.sh@16 -- $ date -u 00:05:50.559 Wed Nov 20 05:17:10 AM UTC 2024 00:05:50.559 06:17:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:50.559 v25.01-pre-192-g57b682926 00:05:50.559 06:17:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:50.559 06:17:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:50.559 06:17:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:50.559 06:17:10 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:05:50.559 06:17:10 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:05:50.559 06:17:10 -- common/autotest_common.sh@10 -- $ set +x 00:05:50.559 
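
The four "Redirecting to ..." lines above come from the resource monitors that start_monitor_resources launches before the build proper: collect-cpu-load, collect-vmstat, collect-cpu-temp, and collect-bmc-pm under spdk/scripts/perf/pm/, each pointed at the output/power directory (-d), told to log to a file (-l), and given the monitor.autobuild.sh.<timestamp> prefix (-p), as seen in the invocations above. A minimal, hypothetical sketch of that sampling pattern — not the scripts' actual code:

    #!/usr/bin/env bash
    # Hypothetical stand-in for a collector such as collect-cpu-load:
    # append one timestamped sample per interval to the .pm.log file
    # named in the "Redirecting to" lines above.
    outdir=$1; prefix=$2; interval=${3:-1}
    log="$outdir/${prefix}_collect-cpu-load.pm.log"
    while :; do
        echo "$(date '+%F %T') $(cut -d' ' -f1-3 /proc/loadavg)" >>"$log"
        sleep "$interval"
    done

The collectors keep sampling in the background for the whole build; the trap stop_monitor_resources EXIT installed at the start of the next block is what tears them down when autobuild exits.
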
************************************ 00:05:50.559 START TEST ubsan 00:05:50.559 ************************************ 00:05:50.559 06:17:10 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:05:50.559 using ubsan 00:05:50.559 00:05:50.559 real 0m0.001s 00:05:50.559 user 0m0.000s 00:05:50.559 sys 0m0.000s 00:05:50.559 06:17:10 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:50.559 06:17:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:50.559 ************************************ 00:05:50.559 END TEST ubsan 00:05:50.559 ************************************ 00:05:50.559 06:17:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:50.559 06:17:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:50.559 06:17:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:50.559 06:17:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:50.559 06:17:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:50.559 06:17:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:50.559 06:17:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:50.559 06:17:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:50.559 06:17:10 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:05:50.820 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:50.820 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:51.081 Using 'verbs' RDMA provider 00:06:06.944 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:06:19.238 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:06:19.759 Creating mk/config.mk...done. 00:06:19.759 Creating mk/cc.flags.mk...done. 00:06:19.759 Type 'make' to build. 00:06:19.759 06:17:39 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:06:19.759 06:17:39 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:06:19.759 06:17:39 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:06:19.759 06:17:39 -- common/autotest_common.sh@10 -- $ set +x 00:06:20.020 ************************************ 00:06:20.020 START TEST make 00:06:20.020 ************************************ 00:06:20.020 06:17:39 make -- common/autotest_common.sh@1127 -- $ make -j144 00:06:20.280 make[1]: Nothing to be done for 'all'. 
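
The START TEST / END TEST banners and the real/user/sys timing above are produced by the run_test wrapper from autotest_common.sh (visible in the xtrace paths). A simplified sketch that reproduces only the visible behaviour; the real helper also manages xtrace and failure bookkeeping:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # prints the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test ubsan echo 'using ubsan'   # the sanity check logged above
    run_test make make -j144            # drives the build whose output follows

The ubsan "test" is only a sanity echo confirming the --enable-ubsan configure flag took effect; the make invocation is what produces the libvfio-user and DPDK build output below.
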
00:06:21.665 The Meson build system 00:06:21.665 Version: 1.5.0 00:06:21.665 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:06:21.665 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:21.665 Build type: native build 00:06:21.665 Project name: libvfio-user 00:06:21.665 Project version: 0.0.1 00:06:21.665 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:21.665 C linker for the host machine: cc ld.bfd 2.40-14 00:06:21.665 Host machine cpu family: x86_64 00:06:21.665 Host machine cpu: x86_64 00:06:21.665 Run-time dependency threads found: YES 00:06:21.665 Library dl found: YES 00:06:21.665 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:21.665 Run-time dependency json-c found: YES 0.17 00:06:21.665 Run-time dependency cmocka found: YES 1.1.7 00:06:21.665 Program pytest-3 found: NO 00:06:21.665 Program flake8 found: NO 00:06:21.665 Program misspell-fixer found: NO 00:06:21.665 Program restructuredtext-lint found: NO 00:06:21.665 Program valgrind found: YES (/usr/bin/valgrind) 00:06:21.665 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:21.665 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:21.665 Compiler for C supports arguments -Wwrite-strings: YES 00:06:21.665 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:06:21.665 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:06:21.665 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:06:21.665 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:06:21.665 Build targets in project: 8 00:06:21.665 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:06:21.665 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:06:21.665 00:06:21.665 libvfio-user 0.0.1 00:06:21.665 00:06:21.665 User defined options 00:06:21.665 buildtype : debug 00:06:21.665 default_library: shared 00:06:21.665 libdir : /usr/local/lib 00:06:21.665 00:06:21.665 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:22.237 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:22.237 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:06:22.237 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:06:22.237 [3/37] Compiling C object samples/null.p/null.c.o 00:06:22.237 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:06:22.237 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:06:22.237 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:06:22.237 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:06:22.237 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:06:22.237 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:06:22.497 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:06:22.497 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:06:22.497 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:06:22.497 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:06:22.497 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:06:22.497 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:06:22.497 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:06:22.497 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:06:22.497 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:06:22.497 [19/37] Compiling C object samples/server.p/server.c.o 00:06:22.497 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:06:22.497 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:06:22.497 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:06:22.497 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:06:22.497 [24/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:06:22.497 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:06:22.497 [26/37] Compiling C object samples/client.p/client.c.o 00:06:22.497 [27/37] Linking target samples/client 00:06:22.497 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:06:22.497 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:06:22.497 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:06:22.497 [31/37] Linking target test/unit_tests 00:06:22.759 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:06:22.759 [33/37] Linking target samples/gpio-pci-idio-16 00:06:22.759 [34/37] Linking target samples/server 00:06:22.759 [35/37] Linking target samples/null 00:06:22.759 [36/37] Linking target samples/lspci 00:06:22.759 [37/37] Linking target samples/shadow_ioeventfd_server 00:06:22.759 INFO: autodetecting backend as ninja 00:06:22.759 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
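
That [37/37] link step completes the libvfio-user compile, and the DESTDIR line that follows stages the install. Reconstructed as plain commands, the sequence the log implies is roughly the following; the compile and install lines match the log, while the meson setup line is an assumption inferred from the printed source/build dirs and options summary (buildtype debug, default_library shared, libdir /usr/local/lib):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BUILD=$SPDK/build/libvfio-user/build-debug
    # Configure (assumed invocation, matching the options the log reports):
    meson setup "$BUILD" "$SPDK/libvfio-user" --buildtype debug \
        -Ddefault_library=shared -Dlibdir=/usr/local/lib
    # Compile, then stage into the DESTDIR tree, exactly as logged:
    /usr/local/bin/ninja -C "$BUILD"
    DESTDIR=$SPDK/build/libvfio-user meson install --quiet -C "$BUILD"

The immediate "ninja: no work to do." after the install command confirms the earlier ninja pass had already built everything the install step depends on.
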
00:06:22.759 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:23.332 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:23.332 ninja: no work to do. 00:06:29.928 The Meson build system 00:06:29.929 Version: 1.5.0 00:06:29.929 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:06:29.929 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:06:29.929 Build type: native build 00:06:29.929 Program cat found: YES (/usr/bin/cat) 00:06:29.929 Project name: DPDK 00:06:29.929 Project version: 24.03.0 00:06:29.929 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:29.929 C linker for the host machine: cc ld.bfd 2.40-14 00:06:29.929 Host machine cpu family: x86_64 00:06:29.929 Host machine cpu: x86_64 00:06:29.929 Message: ## Building in Developer Mode ## 00:06:29.929 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:29.929 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:06:29.929 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:29.929 Program python3 found: YES (/usr/bin/python3) 00:06:29.929 Program cat found: YES (/usr/bin/cat) 00:06:29.929 Compiler for C supports arguments -march=native: YES 00:06:29.929 Checking for size of "void *" : 8 00:06:29.929 Checking for size of "void *" : 8 (cached) 00:06:29.929 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:06:29.929 Library m found: YES 00:06:29.929 Library numa found: YES 00:06:29.929 Has header "numaif.h" : YES 00:06:29.929 Library fdt found: NO 00:06:29.929 Library execinfo found: NO 00:06:29.929 Has header "execinfo.h" : YES 00:06:29.929 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:29.929 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:29.929 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:29.929 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:29.929 Run-time dependency openssl found: YES 3.1.1 00:06:29.929 Run-time dependency libpcap found: YES 1.10.4 00:06:29.929 Has header "pcap.h" with dependency libpcap: YES 00:06:29.929 Compiler for C supports arguments -Wcast-qual: YES 00:06:29.929 Compiler for C supports arguments -Wdeprecated: YES 00:06:29.929 Compiler for C supports arguments -Wformat: YES 00:06:29.929 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:29.929 Compiler for C supports arguments -Wformat-security: NO 00:06:29.929 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:29.929 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:29.929 Compiler for C supports arguments -Wnested-externs: YES 00:06:29.929 Compiler for C supports arguments -Wold-style-definition: YES 00:06:29.929 Compiler for C supports arguments -Wpointer-arith: YES 00:06:29.929 Compiler for C supports arguments -Wsign-compare: YES 00:06:29.929 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:29.929 Compiler for C supports arguments -Wundef: YES 00:06:29.929 Compiler for C supports arguments -Wwrite-strings: YES 00:06:29.929 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:29.929 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:06:29.929 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:29.929 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:29.929 Program objdump found: YES (/usr/bin/objdump) 00:06:29.929 Compiler for C supports arguments -mavx512f: YES 00:06:29.929 Checking if "AVX512 checking" compiles: YES 00:06:29.929 Fetching value of define "__SSE4_2__" : 1 00:06:29.929 Fetching value of define "__AES__" : 1 00:06:29.929 Fetching value of define "__AVX__" : 1 00:06:29.929 Fetching value of define "__AVX2__" : 1 00:06:29.929 Fetching value of define "__AVX512BW__" : 1 00:06:29.929 Fetching value of define "__AVX512CD__" : 1 00:06:29.929 Fetching value of define "__AVX512DQ__" : 1 00:06:29.929 Fetching value of define "__AVX512F__" : 1 00:06:29.929 Fetching value of define "__AVX512VL__" : 1 00:06:29.929 Fetching value of define "__PCLMUL__" : 1 00:06:29.929 Fetching value of define "__RDRND__" : 1 00:06:29.929 Fetching value of define "__RDSEED__" : 1 00:06:29.929 Fetching value of define "__VPCLMULQDQ__" : 1 00:06:29.929 Fetching value of define "__znver1__" : (undefined) 00:06:29.929 Fetching value of define "__znver2__" : (undefined) 00:06:29.929 Fetching value of define "__znver3__" : (undefined) 00:06:29.929 Fetching value of define "__znver4__" : (undefined) 00:06:29.929 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:29.929 Message: lib/log: Defining dependency "log" 00:06:29.929 Message: lib/kvargs: Defining dependency "kvargs" 00:06:29.929 Message: lib/telemetry: Defining dependency "telemetry" 00:06:29.929 Checking for function "getentropy" : NO 00:06:29.929 Message: lib/eal: Defining dependency "eal" 00:06:29.929 Message: lib/ring: Defining dependency "ring" 00:06:29.929 Message: lib/rcu: Defining dependency "rcu" 00:06:29.929 Message: lib/mempool: Defining dependency "mempool" 00:06:29.929 Message: lib/mbuf: Defining dependency "mbuf" 00:06:29.929 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:29.929 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:29.929 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:29.929 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:29.929 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:29.929 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:06:29.929 Compiler for C supports arguments -mpclmul: YES 00:06:29.929 Compiler for C supports arguments -maes: YES 00:06:29.929 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:29.929 Compiler for C supports arguments -mavx512bw: YES 00:06:29.929 Compiler for C supports arguments -mavx512dq: YES 00:06:29.929 Compiler for C supports arguments -mavx512vl: YES 00:06:29.929 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:29.929 Compiler for C supports arguments -mavx2: YES 00:06:29.929 Compiler for C supports arguments -mavx: YES 00:06:29.929 Message: lib/net: Defining dependency "net" 00:06:29.929 Message: lib/meter: Defining dependency "meter" 00:06:29.929 Message: lib/ethdev: Defining dependency "ethdev" 00:06:29.929 Message: lib/pci: Defining dependency "pci" 00:06:29.929 Message: lib/cmdline: Defining dependency "cmdline" 00:06:29.929 Message: lib/hash: Defining dependency "hash" 00:06:29.929 Message: lib/timer: Defining dependency "timer" 00:06:29.929 Message: lib/compressdev: Defining dependency "compressdev" 00:06:29.929 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:29.929 Message: lib/dmadev: Defining dependency "dmadev" 
00:06:29.929 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:29.929 Message: lib/power: Defining dependency "power" 00:06:29.929 Message: lib/reorder: Defining dependency "reorder" 00:06:29.929 Message: lib/security: Defining dependency "security" 00:06:29.929 Has header "linux/userfaultfd.h" : YES 00:06:29.929 Has header "linux/vduse.h" : YES 00:06:29.929 Message: lib/vhost: Defining dependency "vhost" 00:06:29.929 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:29.929 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:29.929 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:29.929 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:29.929 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:29.929 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:29.929 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:29.929 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:29.929 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:29.929 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:29.929 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:29.929 Configuring doxy-api-html.conf using configuration 00:06:29.929 Configuring doxy-api-man.conf using configuration 00:06:29.929 Program mandb found: YES (/usr/bin/mandb) 00:06:29.929 Program sphinx-build found: NO 00:06:29.929 Configuring rte_build_config.h using configuration 00:06:29.929 Message: 00:06:29.929 ================= 00:06:29.929 Applications Enabled 00:06:29.929 ================= 00:06:29.929 00:06:29.929 apps: 00:06:29.929 00:06:29.929 00:06:29.929 Message: 00:06:29.929 ================= 00:06:29.929 Libraries Enabled 00:06:29.929 ================= 00:06:29.929 00:06:29.929 libs: 00:06:29.929 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:29.929 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:29.929 cryptodev, dmadev, power, reorder, security, vhost, 00:06:29.929 00:06:29.929 Message: 00:06:29.929 =============== 00:06:29.929 Drivers Enabled 00:06:29.929 =============== 00:06:29.929 00:06:29.929 common: 00:06:29.929 00:06:29.929 bus: 00:06:29.929 pci, vdev, 00:06:29.929 mempool: 00:06:29.929 ring, 00:06:29.929 dma: 00:06:29.929 00:06:29.929 net: 00:06:29.929 00:06:29.929 crypto: 00:06:29.929 00:06:29.929 compress: 00:06:29.929 00:06:29.929 vdpa: 00:06:29.929 00:06:29.929 00:06:29.929 Message: 00:06:29.929 ================= 00:06:29.929 Content Skipped 00:06:29.929 ================= 00:06:29.929 00:06:29.929 apps: 00:06:29.929 dumpcap: explicitly disabled via build config 00:06:29.929 graph: explicitly disabled via build config 00:06:29.929 pdump: explicitly disabled via build config 00:06:29.929 proc-info: explicitly disabled via build config 00:06:29.929 test-acl: explicitly disabled via build config 00:06:29.929 test-bbdev: explicitly disabled via build config 00:06:29.929 test-cmdline: explicitly disabled via build config 00:06:29.929 test-compress-perf: explicitly disabled via build config 00:06:29.929 test-crypto-perf: explicitly disabled via build config 00:06:29.929 test-dma-perf: explicitly disabled via build config 00:06:29.929 test-eventdev: explicitly disabled via build config 00:06:29.929 test-fib: explicitly disabled via build config 00:06:29.929 test-flow-perf: explicitly disabled via build config 00:06:29.929 test-gpudev: explicitly disabled 
via build config 00:06:29.929 test-mldev: explicitly disabled via build config 00:06:29.929 test-pipeline: explicitly disabled via build config 00:06:29.929 test-pmd: explicitly disabled via build config 00:06:29.930 test-regex: explicitly disabled via build config 00:06:29.930 test-sad: explicitly disabled via build config 00:06:29.930 test-security-perf: explicitly disabled via build config 00:06:29.930 00:06:29.930 libs: 00:06:29.930 argparse: explicitly disabled via build config 00:06:29.930 metrics: explicitly disabled via build config 00:06:29.930 acl: explicitly disabled via build config 00:06:29.930 bbdev: explicitly disabled via build config 00:06:29.930 bitratestats: explicitly disabled via build config 00:06:29.930 bpf: explicitly disabled via build config 00:06:29.930 cfgfile: explicitly disabled via build config 00:06:29.930 distributor: explicitly disabled via build config 00:06:29.930 efd: explicitly disabled via build config 00:06:29.930 eventdev: explicitly disabled via build config 00:06:29.930 dispatcher: explicitly disabled via build config 00:06:29.930 gpudev: explicitly disabled via build config 00:06:29.930 gro: explicitly disabled via build config 00:06:29.930 gso: explicitly disabled via build config 00:06:29.930 ip_frag: explicitly disabled via build config 00:06:29.930 jobstats: explicitly disabled via build config 00:06:29.930 latencystats: explicitly disabled via build config 00:06:29.930 lpm: explicitly disabled via build config 00:06:29.930 member: explicitly disabled via build config 00:06:29.930 pcapng: explicitly disabled via build config 00:06:29.930 rawdev: explicitly disabled via build config 00:06:29.930 regexdev: explicitly disabled via build config 00:06:29.930 mldev: explicitly disabled via build config 00:06:29.930 rib: explicitly disabled via build config 00:06:29.930 sched: explicitly disabled via build config 00:06:29.930 stack: explicitly disabled via build config 00:06:29.930 ipsec: explicitly disabled via build config 00:06:29.930 pdcp: explicitly disabled via build config 00:06:29.930 fib: explicitly disabled via build config 00:06:29.930 port: explicitly disabled via build config 00:06:29.930 pdump: explicitly disabled via build config 00:06:29.930 table: explicitly disabled via build config 00:06:29.930 pipeline: explicitly disabled via build config 00:06:29.930 graph: explicitly disabled via build config 00:06:29.930 node: explicitly disabled via build config 00:06:29.930 00:06:29.930 drivers: 00:06:29.930 common/cpt: not in enabled drivers build config 00:06:29.930 common/dpaax: not in enabled drivers build config 00:06:29.930 common/iavf: not in enabled drivers build config 00:06:29.930 common/idpf: not in enabled drivers build config 00:06:29.930 common/ionic: not in enabled drivers build config 00:06:29.930 common/mvep: not in enabled drivers build config 00:06:29.930 common/octeontx: not in enabled drivers build config 00:06:29.930 bus/auxiliary: not in enabled drivers build config 00:06:29.930 bus/cdx: not in enabled drivers build config 00:06:29.930 bus/dpaa: not in enabled drivers build config 00:06:29.930 bus/fslmc: not in enabled drivers build config 00:06:29.930 bus/ifpga: not in enabled drivers build config 00:06:29.930 bus/platform: not in enabled drivers build config 00:06:29.930 bus/uacce: not in enabled drivers build config 00:06:29.930 bus/vmbus: not in enabled drivers build config 00:06:29.930 common/cnxk: not in enabled drivers build config 00:06:29.930 common/mlx5: not in enabled drivers build config 00:06:29.930 
common/nfp: not in enabled drivers build config 00:06:29.930 common/nitrox: not in enabled drivers build config 00:06:29.930 common/qat: not in enabled drivers build config 00:06:29.930 common/sfc_efx: not in enabled drivers build config 00:06:29.930 mempool/bucket: not in enabled drivers build config 00:06:29.930 mempool/cnxk: not in enabled drivers build config 00:06:29.930 mempool/dpaa: not in enabled drivers build config 00:06:29.930 mempool/dpaa2: not in enabled drivers build config 00:06:29.930 mempool/octeontx: not in enabled drivers build config 00:06:29.930 mempool/stack: not in enabled drivers build config 00:06:29.930 dma/cnxk: not in enabled drivers build config 00:06:29.930 dma/dpaa: not in enabled drivers build config 00:06:29.930 dma/dpaa2: not in enabled drivers build config 00:06:29.930 dma/hisilicon: not in enabled drivers build config 00:06:29.930 dma/idxd: not in enabled drivers build config 00:06:29.930 dma/ioat: not in enabled drivers build config 00:06:29.930 dma/skeleton: not in enabled drivers build config 00:06:29.930 net/af_packet: not in enabled drivers build config 00:06:29.930 net/af_xdp: not in enabled drivers build config 00:06:29.930 net/ark: not in enabled drivers build config 00:06:29.930 net/atlantic: not in enabled drivers build config 00:06:29.930 net/avp: not in enabled drivers build config 00:06:29.930 net/axgbe: not in enabled drivers build config 00:06:29.930 net/bnx2x: not in enabled drivers build config 00:06:29.930 net/bnxt: not in enabled drivers build config 00:06:29.930 net/bonding: not in enabled drivers build config 00:06:29.930 net/cnxk: not in enabled drivers build config 00:06:29.930 net/cpfl: not in enabled drivers build config 00:06:29.930 net/cxgbe: not in enabled drivers build config 00:06:29.930 net/dpaa: not in enabled drivers build config 00:06:29.930 net/dpaa2: not in enabled drivers build config 00:06:29.930 net/e1000: not in enabled drivers build config 00:06:29.930 net/ena: not in enabled drivers build config 00:06:29.930 net/enetc: not in enabled drivers build config 00:06:29.930 net/enetfec: not in enabled drivers build config 00:06:29.930 net/enic: not in enabled drivers build config 00:06:29.930 net/failsafe: not in enabled drivers build config 00:06:29.930 net/fm10k: not in enabled drivers build config 00:06:29.930 net/gve: not in enabled drivers build config 00:06:29.930 net/hinic: not in enabled drivers build config 00:06:29.930 net/hns3: not in enabled drivers build config 00:06:29.930 net/i40e: not in enabled drivers build config 00:06:29.930 net/iavf: not in enabled drivers build config 00:06:29.930 net/ice: not in enabled drivers build config 00:06:29.930 net/idpf: not in enabled drivers build config 00:06:29.930 net/igc: not in enabled drivers build config 00:06:29.930 net/ionic: not in enabled drivers build config 00:06:29.930 net/ipn3ke: not in enabled drivers build config 00:06:29.930 net/ixgbe: not in enabled drivers build config 00:06:29.930 net/mana: not in enabled drivers build config 00:06:29.930 net/memif: not in enabled drivers build config 00:06:29.930 net/mlx4: not in enabled drivers build config 00:06:29.930 net/mlx5: not in enabled drivers build config 00:06:29.930 net/mvneta: not in enabled drivers build config 00:06:29.930 net/mvpp2: not in enabled drivers build config 00:06:29.930 net/netvsc: not in enabled drivers build config 00:06:29.930 net/nfb: not in enabled drivers build config 00:06:29.930 net/nfp: not in enabled drivers build config 00:06:29.930 net/ngbe: not in enabled drivers build 
config 00:06:29.930 net/null: not in enabled drivers build config 00:06:29.930 net/octeontx: not in enabled drivers build config 00:06:29.930 net/octeon_ep: not in enabled drivers build config 00:06:29.930 net/pcap: not in enabled drivers build config 00:06:29.930 net/pfe: not in enabled drivers build config 00:06:29.930 net/qede: not in enabled drivers build config 00:06:29.930 net/ring: not in enabled drivers build config 00:06:29.930 net/sfc: not in enabled drivers build config 00:06:29.930 net/softnic: not in enabled drivers build config 00:06:29.930 net/tap: not in enabled drivers build config 00:06:29.930 net/thunderx: not in enabled drivers build config 00:06:29.930 net/txgbe: not in enabled drivers build config 00:06:29.930 net/vdev_netvsc: not in enabled drivers build config 00:06:29.930 net/vhost: not in enabled drivers build config 00:06:29.930 net/virtio: not in enabled drivers build config 00:06:29.930 net/vmxnet3: not in enabled drivers build config 00:06:29.930 raw/*: missing internal dependency, "rawdev" 00:06:29.930 crypto/armv8: not in enabled drivers build config 00:06:29.930 crypto/bcmfs: not in enabled drivers build config 00:06:29.930 crypto/caam_jr: not in enabled drivers build config 00:06:29.930 crypto/ccp: not in enabled drivers build config 00:06:29.930 crypto/cnxk: not in enabled drivers build config 00:06:29.930 crypto/dpaa_sec: not in enabled drivers build config 00:06:29.930 crypto/dpaa2_sec: not in enabled drivers build config 00:06:29.930 crypto/ipsec_mb: not in enabled drivers build config 00:06:29.930 crypto/mlx5: not in enabled drivers build config 00:06:29.930 crypto/mvsam: not in enabled drivers build config 00:06:29.930 crypto/nitrox: not in enabled drivers build config 00:06:29.930 crypto/null: not in enabled drivers build config 00:06:29.930 crypto/octeontx: not in enabled drivers build config 00:06:29.930 crypto/openssl: not in enabled drivers build config 00:06:29.930 crypto/scheduler: not in enabled drivers build config 00:06:29.930 crypto/uadk: not in enabled drivers build config 00:06:29.930 crypto/virtio: not in enabled drivers build config 00:06:29.930 compress/isal: not in enabled drivers build config 00:06:29.930 compress/mlx5: not in enabled drivers build config 00:06:29.930 compress/nitrox: not in enabled drivers build config 00:06:29.930 compress/octeontx: not in enabled drivers build config 00:06:29.930 compress/zlib: not in enabled drivers build config 00:06:29.930 regex/*: missing internal dependency, "regexdev" 00:06:29.930 ml/*: missing internal dependency, "mldev" 00:06:29.930 vdpa/ifc: not in enabled drivers build config 00:06:29.930 vdpa/mlx5: not in enabled drivers build config 00:06:29.930 vdpa/nfp: not in enabled drivers build config 00:06:29.930 vdpa/sfc: not in enabled drivers build config 00:06:29.930 event/*: missing internal dependency, "eventdev" 00:06:29.930 baseband/*: missing internal dependency, "bbdev" 00:06:29.930 gpu/*: missing internal dependency, "gpudev" 00:06:29.930 00:06:29.930 00:06:29.930 Build targets in project: 84 00:06:29.930 00:06:29.930 DPDK 24.03.0 00:06:29.930 00:06:29.930 User defined options 00:06:29.930 buildtype : debug 00:06:29.930 default_library : shared 00:06:29.930 libdir : lib 00:06:29.930 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:29.930 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:29.930 c_link_args : 00:06:29.930 cpu_instruction_set: native 00:06:29.930 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:06:29.930 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:06:29.930 enable_docs : false 00:06:29.930 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:06:29.931 enable_kmods : false 00:06:29.931 max_lcores : 128 00:06:29.931 tests : false 00:06:29.931 00:06:29.931 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:29.931 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:06:29.931 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:29.931 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:29.931 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:29.931 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:29.931 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:29.931 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:29.931 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:29.931 [8/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:29.931 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:29.931 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:29.931 [11/267] Linking static target lib/librte_kvargs.a 00:06:29.931 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:29.931 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:29.931 [14/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:29.931 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:29.931 [16/267] Linking static target lib/librte_log.a 00:06:29.931 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:29.931 [18/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:29.931 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:29.931 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:29.931 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:29.931 [22/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:29.931 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:29.931 [24/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:29.931 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:29.931 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:29.931 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:29.931 [28/267] Linking static target lib/librte_pci.a 00:06:29.931 [29/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:29.931 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:29.931 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 
00:06:29.931 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:29.931 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:29.931 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:29.931 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:30.190 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:30.190 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:30.190 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:30.190 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:30.190 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.190 [41/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:06:30.190 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:30.190 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:30.190 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:30.190 [45/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:30.190 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:30.190 [47/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.190 [48/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:30.190 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:30.190 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:30.190 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:30.190 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:30.190 [53/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:30.190 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:30.190 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:30.190 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:30.190 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:30.190 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:30.190 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:30.190 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:30.190 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:30.190 [62/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:30.190 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:30.190 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:30.190 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:30.190 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:30.190 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:30.190 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:30.190 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:30.190 [70/267] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:30.190 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:30.450 [72/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:30.450 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:30.450 [74/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:30.450 [75/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:30.450 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:30.450 [77/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:30.450 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:30.450 [79/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:30.450 [80/267] Linking static target lib/librte_meter.a 00:06:30.450 [81/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:30.450 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:30.450 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:30.450 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:30.450 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:30.450 [86/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:30.450 [87/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:30.450 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:30.450 [89/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:30.450 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:30.450 [91/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:30.450 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:30.450 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:30.450 [94/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:30.450 [95/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:30.450 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:30.450 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:30.450 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:30.450 [99/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:30.450 [100/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:30.450 [101/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:30.450 [102/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:30.450 [103/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:30.450 [104/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:30.450 [105/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:30.450 [106/267] Linking static target lib/librte_telemetry.a 00:06:30.450 [107/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:30.450 [108/267] Linking static target lib/librte_ring.a 00:06:30.450 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:30.450 [110/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:30.450 [111/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:30.450 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:30.450 [113/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:30.450 [114/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:30.450 [115/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:30.450 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:30.450 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:30.450 [118/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:30.450 [119/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:30.450 [120/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.450 [121/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:30.450 [122/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:30.450 [123/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:30.450 [124/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:30.450 [125/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:30.450 [126/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:30.450 [127/267] Linking static target lib/librte_mempool.a 00:06:30.450 [128/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:30.450 [129/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:30.450 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:30.450 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:30.450 [132/267] Linking static target lib/librte_timer.a 00:06:30.450 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:30.450 [134/267] Linking static target lib/librte_rcu.a 00:06:30.450 [135/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:30.450 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:30.450 [137/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:30.450 [138/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:30.450 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:30.450 [140/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:30.450 [141/267] Linking static target lib/librte_compressdev.a 00:06:30.450 [142/267] Linking target lib/librte_log.so.24.1 00:06:30.450 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:30.450 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:30.450 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:30.450 [146/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:30.450 [147/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:30.450 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:30.450 [149/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:30.450 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:30.450 [151/267] Linking static target lib/librte_cmdline.a 00:06:30.450 [152/267] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:30.450 [153/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:30.450 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:30.450 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:30.450 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:30.450 [157/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:30.450 [158/267] Linking static target lib/librte_net.a 00:06:30.450 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:30.450 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:30.450 [161/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:30.450 [162/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:30.450 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:30.450 [164/267] Linking static target lib/librte_reorder.a 00:06:30.450 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:30.451 [166/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:30.451 [167/267] Linking static target lib/librte_dmadev.a 00:06:30.451 [168/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:30.451 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:30.451 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:30.451 [171/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:30.451 [172/267] Linking static target lib/librte_security.a 00:06:30.451 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:30.451 [174/267] Linking static target lib/librte_power.a 00:06:30.712 [175/267] Linking static target lib/librte_eal.a 00:06:30.712 [176/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:30.712 [177/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:30.712 [178/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:30.712 [179/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.712 [180/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:30.712 [181/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:30.712 [182/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:30.712 [183/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:30.712 [184/267] Linking target lib/librte_kvargs.so.24.1 00:06:30.712 [185/267] Linking static target lib/librte_mbuf.a 00:06:30.712 [186/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:30.712 [187/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:30.712 [188/267] Linking static target drivers/librte_bus_vdev.a 00:06:30.712 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:30.712 [190/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:30.712 [191/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:30.712 [192/267] Linking static target lib/librte_hash.a 00:06:30.712 [193/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.712 
[194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:30.712 [195/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:30.712 [196/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:30.973 [197/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:30.973 [198/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:30.973 [199/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:30.973 [200/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.973 [201/267] Linking static target drivers/librte_mempool_ring.a 00:06:30.973 [202/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.973 [203/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:30.973 [204/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:30.973 [205/267] Linking static target drivers/librte_bus_pci.a 00:06:30.973 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:30.973 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:30.973 [208/267] Linking static target lib/librte_cryptodev.a 00:06:30.973 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.973 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.973 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:30.973 [212/267] Linking target lib/librte_telemetry.so.24.1 00:06:31.235 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.235 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:31.235 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.235 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.235 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:31.235 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:31.235 [219/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.235 [220/267] Linking static target lib/librte_ethdev.a 00:06:31.496 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.496 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.496 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.758 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.758 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.758 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:32.330 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:32.330 [228/267] Linking static target lib/librte_vhost.a 00:06:33.274 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:06:34.217 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:40.803 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:42.187 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:42.187 [233/267] Linking target lib/librte_eal.so.24.1 00:06:42.448 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:42.448 [235/267] Linking target lib/librte_ring.so.24.1 00:06:42.448 [236/267] Linking target lib/librte_meter.so.24.1 00:06:42.448 [237/267] Linking target lib/librte_pci.so.24.1 00:06:42.448 [238/267] Linking target lib/librte_timer.so.24.1 00:06:42.448 [239/267] Linking target lib/librte_dmadev.so.24.1 00:06:42.448 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:06:42.448 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:42.448 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:42.448 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:42.448 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:42.708 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:42.708 [246/267] Linking target lib/librte_rcu.so.24.1 00:06:42.708 [247/267] Linking target lib/librte_mempool.so.24.1 00:06:42.708 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:06:42.708 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:42.708 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:42.708 [251/267] Linking target lib/librte_mbuf.so.24.1 00:06:42.708 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:06:42.970 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:42.970 [254/267] Linking target lib/librte_compressdev.so.24.1 00:06:42.970 [255/267] Linking target lib/librte_reorder.so.24.1 00:06:42.970 [256/267] Linking target lib/librte_net.so.24.1 00:06:42.970 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:06:43.232 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:43.232 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:43.232 [260/267] Linking target lib/librte_cmdline.so.24.1 00:06:43.232 [261/267] Linking target lib/librte_hash.so.24.1 00:06:43.232 [262/267] Linking target lib/librte_security.so.24.1 00:06:43.232 [263/267] Linking target lib/librte_ethdev.so.24.1 00:06:43.232 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:43.232 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:43.493 [266/267] Linking target lib/librte_power.so.24.1 00:06:43.493 [267/267] Linking target lib/librte_vhost.so.24.1 00:06:43.493 INFO: autodetecting backend as ninja 00:06:43.493 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:06:46.798 CC lib/log/log.o 00:06:46.798 CC lib/log/log_flags.o 00:06:46.798 CC lib/log/log_deprecated.o 00:06:46.798 CC lib/ut_mock/mock.o 00:06:46.798 CC lib/ut/ut.o 00:06:46.798 LIB libspdk_log.a 00:06:46.798 LIB libspdk_ut.a 00:06:46.798 LIB 
libspdk_ut_mock.a 00:06:46.798 SO libspdk_log.so.7.1 00:06:46.798 SO libspdk_ut.so.2.0 00:06:46.798 SO libspdk_ut_mock.so.6.0 00:06:46.798 SYMLINK libspdk_log.so 00:06:46.798 SYMLINK libspdk_ut_mock.so 00:06:46.798 SYMLINK libspdk_ut.so 00:06:47.058 CC lib/dma/dma.o 00:06:47.058 CC lib/util/base64.o 00:06:47.058 CXX lib/trace_parser/trace.o 00:06:47.058 CC lib/util/bit_array.o 00:06:47.058 CC lib/ioat/ioat.o 00:06:47.058 CC lib/util/cpuset.o 00:06:47.058 CC lib/util/crc16.o 00:06:47.058 CC lib/util/crc32.o 00:06:47.058 CC lib/util/crc32c.o 00:06:47.058 CC lib/util/crc32_ieee.o 00:06:47.058 CC lib/util/crc64.o 00:06:47.058 CC lib/util/dif.o 00:06:47.058 CC lib/util/fd.o 00:06:47.058 CC lib/util/fd_group.o 00:06:47.058 CC lib/util/file.o 00:06:47.058 CC lib/util/hexlify.o 00:06:47.058 CC lib/util/iov.o 00:06:47.058 CC lib/util/math.o 00:06:47.058 CC lib/util/net.o 00:06:47.058 CC lib/util/pipe.o 00:06:47.058 CC lib/util/strerror_tls.o 00:06:47.058 CC lib/util/string.o 00:06:47.058 CC lib/util/uuid.o 00:06:47.058 CC lib/util/xor.o 00:06:47.058 CC lib/util/zipf.o 00:06:47.058 CC lib/util/md5.o 00:06:47.058 CC lib/vfio_user/host/vfio_user_pci.o 00:06:47.058 CC lib/vfio_user/host/vfio_user.o 00:06:47.319 LIB libspdk_dma.a 00:06:47.319 SO libspdk_dma.so.5.0 00:06:47.319 LIB libspdk_ioat.a 00:06:47.319 SYMLINK libspdk_dma.so 00:06:47.319 SO libspdk_ioat.so.7.0 00:06:47.319 SYMLINK libspdk_ioat.so 00:06:47.319 LIB libspdk_vfio_user.a 00:06:47.581 SO libspdk_vfio_user.so.5.0 00:06:47.581 LIB libspdk_util.a 00:06:47.581 SYMLINK libspdk_vfio_user.so 00:06:47.581 SO libspdk_util.so.10.1 00:06:47.842 SYMLINK libspdk_util.so 00:06:47.842 LIB libspdk_trace_parser.a 00:06:47.842 SO libspdk_trace_parser.so.6.0 00:06:47.843 SYMLINK libspdk_trace_parser.so 00:06:48.107 CC lib/json/json_parse.o 00:06:48.107 CC lib/json/json_util.o 00:06:48.107 CC lib/vmd/vmd.o 00:06:48.107 CC lib/json/json_write.o 00:06:48.107 CC lib/conf/conf.o 00:06:48.107 CC lib/vmd/led.o 00:06:48.107 CC lib/idxd/idxd.o 00:06:48.107 CC lib/rdma_utils/rdma_utils.o 00:06:48.107 CC lib/env_dpdk/env.o 00:06:48.107 CC lib/idxd/idxd_user.o 00:06:48.107 CC lib/env_dpdk/memory.o 00:06:48.107 CC lib/idxd/idxd_kernel.o 00:06:48.107 CC lib/env_dpdk/pci.o 00:06:48.107 CC lib/env_dpdk/init.o 00:06:48.107 CC lib/env_dpdk/threads.o 00:06:48.107 CC lib/env_dpdk/pci_ioat.o 00:06:48.107 CC lib/env_dpdk/pci_virtio.o 00:06:48.107 CC lib/env_dpdk/pci_vmd.o 00:06:48.107 CC lib/env_dpdk/pci_idxd.o 00:06:48.107 CC lib/env_dpdk/pci_event.o 00:06:48.107 CC lib/env_dpdk/sigbus_handler.o 00:06:48.107 CC lib/env_dpdk/pci_dpdk.o 00:06:48.107 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:48.107 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:48.368 LIB libspdk_conf.a 00:06:48.368 SO libspdk_conf.so.6.0 00:06:48.368 LIB libspdk_rdma_utils.a 00:06:48.368 LIB libspdk_json.a 00:06:48.368 SO libspdk_rdma_utils.so.1.0 00:06:48.368 SO libspdk_json.so.6.0 00:06:48.368 SYMLINK libspdk_conf.so 00:06:48.631 SYMLINK libspdk_rdma_utils.so 00:06:48.631 SYMLINK libspdk_json.so 00:06:48.631 LIB libspdk_idxd.a 00:06:48.631 LIB libspdk_vmd.a 00:06:48.631 SO libspdk_idxd.so.12.1 00:06:48.631 SO libspdk_vmd.so.6.0 00:06:48.892 SYMLINK libspdk_idxd.so 00:06:48.892 SYMLINK libspdk_vmd.so 00:06:48.892 CC lib/rdma_provider/common.o 00:06:48.892 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:48.892 CC lib/jsonrpc/jsonrpc_server.o 00:06:48.892 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:48.892 CC lib/jsonrpc/jsonrpc_client.o 00:06:48.892 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:49.153 LIB libspdk_rdma_provider.a 
00:06:49.153 LIB libspdk_jsonrpc.a 00:06:49.153 SO libspdk_rdma_provider.so.7.0 00:06:49.153 SO libspdk_jsonrpc.so.6.0 00:06:49.153 SYMLINK libspdk_rdma_provider.so 00:06:49.414 SYMLINK libspdk_jsonrpc.so 00:06:49.414 LIB libspdk_env_dpdk.a 00:06:49.414 SO libspdk_env_dpdk.so.15.1 00:06:49.674 SYMLINK libspdk_env_dpdk.so 00:06:49.674 CC lib/rpc/rpc.o 00:06:49.935 LIB libspdk_rpc.a 00:06:49.935 SO libspdk_rpc.so.6.0 00:06:49.935 SYMLINK libspdk_rpc.so 00:06:50.528 CC lib/trace/trace.o 00:06:50.528 CC lib/trace/trace_flags.o 00:06:50.528 CC lib/notify/notify.o 00:06:50.528 CC lib/trace/trace_rpc.o 00:06:50.528 CC lib/notify/notify_rpc.o 00:06:50.528 CC lib/keyring/keyring.o 00:06:50.528 CC lib/keyring/keyring_rpc.o 00:06:50.528 LIB libspdk_notify.a 00:06:50.528 SO libspdk_notify.so.6.0 00:06:50.528 LIB libspdk_keyring.a 00:06:50.528 LIB libspdk_trace.a 00:06:50.528 SO libspdk_keyring.so.2.0 00:06:50.790 SYMLINK libspdk_notify.so 00:06:50.790 SO libspdk_trace.so.11.0 00:06:50.790 SYMLINK libspdk_keyring.so 00:06:50.790 SYMLINK libspdk_trace.so 00:06:51.052 CC lib/thread/thread.o 00:06:51.052 CC lib/thread/iobuf.o 00:06:51.052 CC lib/sock/sock.o 00:06:51.052 CC lib/sock/sock_rpc.o 00:06:51.625 LIB libspdk_sock.a 00:06:51.625 SO libspdk_sock.so.10.0 00:06:51.625 SYMLINK libspdk_sock.so 00:06:51.887 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:51.887 CC lib/nvme/nvme_ctrlr.o 00:06:51.887 CC lib/nvme/nvme_fabric.o 00:06:51.887 CC lib/nvme/nvme_ns_cmd.o 00:06:51.887 CC lib/nvme/nvme_ns.o 00:06:51.887 CC lib/nvme/nvme_pcie_common.o 00:06:51.887 CC lib/nvme/nvme_pcie.o 00:06:51.887 CC lib/nvme/nvme_qpair.o 00:06:51.887 CC lib/nvme/nvme.o 00:06:51.887 CC lib/nvme/nvme_quirks.o 00:06:51.887 CC lib/nvme/nvme_transport.o 00:06:51.887 CC lib/nvme/nvme_discovery.o 00:06:51.887 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:51.887 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:51.887 CC lib/nvme/nvme_tcp.o 00:06:51.887 CC lib/nvme/nvme_opal.o 00:06:51.887 CC lib/nvme/nvme_io_msg.o 00:06:51.887 CC lib/nvme/nvme_poll_group.o 00:06:51.887 CC lib/nvme/nvme_zns.o 00:06:51.887 CC lib/nvme/nvme_stubs.o 00:06:51.887 CC lib/nvme/nvme_auth.o 00:06:51.887 CC lib/nvme/nvme_cuse.o 00:06:51.887 CC lib/nvme/nvme_vfio_user.o 00:06:51.887 CC lib/nvme/nvme_rdma.o 00:06:52.460 LIB libspdk_thread.a 00:06:52.460 SO libspdk_thread.so.11.0 00:06:52.460 SYMLINK libspdk_thread.so 00:06:52.721 CC lib/accel/accel.o 00:06:52.721 CC lib/accel/accel_rpc.o 00:06:52.721 CC lib/accel/accel_sw.o 00:06:52.721 CC lib/blob/blobstore.o 00:06:52.721 CC lib/blob/request.o 00:06:52.721 CC lib/blob/zeroes.o 00:06:52.721 CC lib/blob/blob_bs_dev.o 00:06:52.721 CC lib/fsdev/fsdev.o 00:06:52.721 CC lib/virtio/virtio.o 00:06:52.721 CC lib/fsdev/fsdev_io.o 00:06:52.721 CC lib/virtio/virtio_vhost_user.o 00:06:52.721 CC lib/fsdev/fsdev_rpc.o 00:06:52.721 CC lib/virtio/virtio_vfio_user.o 00:06:52.721 CC lib/init/json_config.o 00:06:52.721 CC lib/virtio/virtio_pci.o 00:06:52.721 CC lib/init/subsystem.o 00:06:52.721 CC lib/vfu_tgt/tgt_endpoint.o 00:06:52.721 CC lib/init/subsystem_rpc.o 00:06:52.721 CC lib/init/rpc.o 00:06:52.982 CC lib/vfu_tgt/tgt_rpc.o 00:06:52.982 LIB libspdk_init.a 00:06:53.242 SO libspdk_init.so.6.0 00:06:53.242 LIB libspdk_virtio.a 00:06:53.242 LIB libspdk_vfu_tgt.a 00:06:53.242 SO libspdk_virtio.so.7.0 00:06:53.242 SYMLINK libspdk_init.so 00:06:53.242 SO libspdk_vfu_tgt.so.3.0 00:06:53.242 SYMLINK libspdk_virtio.so 00:06:53.242 SYMLINK libspdk_vfu_tgt.so 00:06:53.504 LIB libspdk_fsdev.a 00:06:53.504 SO libspdk_fsdev.so.2.0 00:06:53.504 SYMLINK libspdk_fsdev.so 
00:06:53.504 CC lib/event/app.o 00:06:53.504 CC lib/event/reactor.o 00:06:53.504 CC lib/event/log_rpc.o 00:06:53.504 CC lib/event/app_rpc.o 00:06:53.504 CC lib/event/scheduler_static.o 00:06:53.765 LIB libspdk_accel.a 00:06:53.765 SO libspdk_accel.so.16.0 00:06:53.765 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:53.765 LIB libspdk_nvme.a 00:06:54.026 SYMLINK libspdk_accel.so 00:06:54.026 LIB libspdk_event.a 00:06:54.026 SO libspdk_nvme.so.15.0 00:06:54.026 SO libspdk_event.so.14.0 00:06:54.026 SYMLINK libspdk_event.so 00:06:54.286 SYMLINK libspdk_nvme.so 00:06:54.286 CC lib/bdev/bdev.o 00:06:54.286 CC lib/bdev/bdev_rpc.o 00:06:54.286 CC lib/bdev/bdev_zone.o 00:06:54.286 CC lib/bdev/part.o 00:06:54.286 CC lib/bdev/scsi_nvme.o 00:06:54.547 LIB libspdk_fuse_dispatcher.a 00:06:54.547 SO libspdk_fuse_dispatcher.so.1.0 00:06:54.547 SYMLINK libspdk_fuse_dispatcher.so 00:06:55.490 LIB libspdk_blob.a 00:06:55.490 SO libspdk_blob.so.11.0 00:06:55.490 SYMLINK libspdk_blob.so 00:06:56.063 CC lib/blobfs/blobfs.o 00:06:56.063 CC lib/blobfs/tree.o 00:06:56.063 CC lib/lvol/lvol.o 00:06:56.635 LIB libspdk_bdev.a 00:06:56.635 SO libspdk_bdev.so.17.0 00:06:56.635 LIB libspdk_blobfs.a 00:06:56.897 SYMLINK libspdk_bdev.so 00:06:56.897 SO libspdk_blobfs.so.10.0 00:06:56.897 LIB libspdk_lvol.a 00:06:56.897 SYMLINK libspdk_blobfs.so 00:06:56.897 SO libspdk_lvol.so.10.0 00:06:56.897 SYMLINK libspdk_lvol.so 00:06:57.159 CC lib/nvmf/ctrlr.o 00:06:57.159 CC lib/nvmf/ctrlr_discovery.o 00:06:57.159 CC lib/nvmf/ctrlr_bdev.o 00:06:57.159 CC lib/nvmf/subsystem.o 00:06:57.159 CC lib/nvmf/nvmf.o 00:06:57.159 CC lib/nvmf/transport.o 00:06:57.159 CC lib/nvmf/nvmf_rpc.o 00:06:57.159 CC lib/scsi/dev.o 00:06:57.159 CC lib/nvmf/tcp.o 00:06:57.159 CC lib/scsi/lun.o 00:06:57.159 CC lib/nvmf/stubs.o 00:06:57.159 CC lib/scsi/port.o 00:06:57.159 CC lib/nvmf/mdns_server.o 00:06:57.159 CC lib/scsi/scsi.o 00:06:57.159 CC lib/ublk/ublk.o 00:06:57.159 CC lib/nvmf/vfio_user.o 00:06:57.159 CC lib/scsi/scsi_bdev.o 00:06:57.159 CC lib/nbd/nbd.o 00:06:57.159 CC lib/nvmf/rdma.o 00:06:57.159 CC lib/ublk/ublk_rpc.o 00:06:57.159 CC lib/scsi/scsi_pr.o 00:06:57.159 CC lib/ftl/ftl_core.o 00:06:57.159 CC lib/nvmf/auth.o 00:06:57.159 CC lib/nbd/nbd_rpc.o 00:06:57.159 CC lib/scsi/scsi_rpc.o 00:06:57.159 CC lib/ftl/ftl_init.o 00:06:57.159 CC lib/scsi/task.o 00:06:57.159 CC lib/ftl/ftl_layout.o 00:06:57.159 CC lib/ftl/ftl_debug.o 00:06:57.159 CC lib/ftl/ftl_io.o 00:06:57.159 CC lib/ftl/ftl_sb.o 00:06:57.159 CC lib/ftl/ftl_l2p.o 00:06:57.159 CC lib/ftl/ftl_l2p_flat.o 00:06:57.159 CC lib/ftl/ftl_nv_cache.o 00:06:57.159 CC lib/ftl/ftl_band.o 00:06:57.159 CC lib/ftl/ftl_band_ops.o 00:06:57.159 CC lib/ftl/ftl_writer.o 00:06:57.159 CC lib/ftl/ftl_rq.o 00:06:57.159 CC lib/ftl/ftl_reloc.o 00:06:57.159 CC lib/ftl/ftl_l2p_cache.o 00:06:57.159 CC lib/ftl/ftl_p2l.o 00:06:57.159 CC lib/ftl/ftl_p2l_log.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:57.159 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:57.159 CC lib/ftl/utils/ftl_conf.o 00:06:57.159 CC lib/ftl/utils/ftl_md.o 
00:06:57.159 CC lib/ftl/utils/ftl_mempool.o 00:06:57.159 CC lib/ftl/utils/ftl_bitmap.o 00:06:57.159 CC lib/ftl/utils/ftl_property.o 00:06:57.159 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:57.159 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:57.159 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:57.159 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:57.159 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:57.159 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:57.159 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:57.159 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:57.159 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:57.159 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:57.159 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:57.159 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:57.159 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:57.159 CC lib/ftl/base/ftl_base_dev.o 00:06:57.159 CC lib/ftl/ftl_trace.o 00:06:57.159 CC lib/ftl/base/ftl_base_bdev.o 00:06:57.727 LIB libspdk_nbd.a 00:06:57.727 SO libspdk_nbd.so.7.0 00:06:57.727 LIB libspdk_scsi.a 00:06:57.727 SO libspdk_scsi.so.9.0 00:06:57.988 SYMLINK libspdk_nbd.so 00:06:57.988 SYMLINK libspdk_scsi.so 00:06:57.988 LIB libspdk_ublk.a 00:06:57.988 SO libspdk_ublk.so.3.0 00:06:57.988 SYMLINK libspdk_ublk.so 00:06:58.249 LIB libspdk_ftl.a 00:06:58.249 CC lib/iscsi/conn.o 00:06:58.249 CC lib/iscsi/init_grp.o 00:06:58.249 CC lib/iscsi/iscsi.o 00:06:58.249 CC lib/iscsi/param.o 00:06:58.249 CC lib/iscsi/portal_grp.o 00:06:58.249 CC lib/iscsi/tgt_node.o 00:06:58.249 CC lib/iscsi/iscsi_subsystem.o 00:06:58.249 CC lib/iscsi/iscsi_rpc.o 00:06:58.249 CC lib/iscsi/task.o 00:06:58.249 CC lib/vhost/vhost.o 00:06:58.249 CC lib/vhost/vhost_rpc.o 00:06:58.249 CC lib/vhost/vhost_scsi.o 00:06:58.249 CC lib/vhost/vhost_blk.o 00:06:58.249 CC lib/vhost/rte_vhost_user.o 00:06:58.511 SO libspdk_ftl.so.9.0 00:06:58.772 SYMLINK libspdk_ftl.so 00:06:59.032 LIB libspdk_nvmf.a 00:06:59.292 SO libspdk_nvmf.so.20.0 00:06:59.293 LIB libspdk_vhost.a 00:06:59.293 SO libspdk_vhost.so.8.0 00:06:59.293 SYMLINK libspdk_nvmf.so 00:06:59.552 SYMLINK libspdk_vhost.so 00:06:59.552 LIB libspdk_iscsi.a 00:06:59.552 SO libspdk_iscsi.so.8.0 00:06:59.813 SYMLINK libspdk_iscsi.so 00:07:00.385 CC module/env_dpdk/env_dpdk_rpc.o 00:07:00.385 CC module/vfu_device/vfu_virtio.o 00:07:00.385 CC module/vfu_device/vfu_virtio_blk.o 00:07:00.385 CC module/vfu_device/vfu_virtio_scsi.o 00:07:00.385 CC module/vfu_device/vfu_virtio_rpc.o 00:07:00.385 CC module/vfu_device/vfu_virtio_fs.o 00:07:00.646 CC module/accel/ioat/accel_ioat.o 00:07:00.646 CC module/accel/ioat/accel_ioat_rpc.o 00:07:00.646 LIB libspdk_env_dpdk_rpc.a 00:07:00.646 CC module/fsdev/aio/fsdev_aio.o 00:07:00.646 CC module/accel/dsa/accel_dsa.o 00:07:00.646 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:00.646 CC module/fsdev/aio/linux_aio_mgr.o 00:07:00.646 CC module/accel/iaa/accel_iaa.o 00:07:00.646 CC module/accel/dsa/accel_dsa_rpc.o 00:07:00.646 CC module/accel/iaa/accel_iaa_rpc.o 00:07:00.646 CC module/accel/error/accel_error.o 00:07:00.646 CC module/accel/error/accel_error_rpc.o 00:07:00.646 CC module/sock/posix/posix.o 00:07:00.646 CC module/blob/bdev/blob_bdev.o 00:07:00.646 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:00.646 CC module/keyring/linux/keyring.o 00:07:00.646 CC module/keyring/linux/keyring_rpc.o 00:07:00.646 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:00.646 CC module/keyring/file/keyring.o 00:07:00.646 CC module/keyring/file/keyring_rpc.o 00:07:00.646 CC module/scheduler/gscheduler/gscheduler.o 00:07:00.646 SO libspdk_env_dpdk_rpc.so.6.0 00:07:00.646 SYMLINK 
libspdk_env_dpdk_rpc.so 00:07:00.646 LIB libspdk_keyring_linux.a 00:07:00.646 LIB libspdk_keyring_file.a 00:07:00.646 LIB libspdk_accel_ioat.a 00:07:00.646 LIB libspdk_scheduler_dpdk_governor.a 00:07:00.646 LIB libspdk_scheduler_gscheduler.a 00:07:00.646 SO libspdk_keyring_linux.so.1.0 00:07:00.906 LIB libspdk_accel_iaa.a 00:07:00.907 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:00.907 SO libspdk_accel_ioat.so.6.0 00:07:00.907 LIB libspdk_accel_error.a 00:07:00.907 SO libspdk_keyring_file.so.2.0 00:07:00.907 LIB libspdk_scheduler_dynamic.a 00:07:00.907 SO libspdk_scheduler_gscheduler.so.4.0 00:07:00.907 SO libspdk_accel_error.so.2.0 00:07:00.907 SO libspdk_accel_iaa.so.3.0 00:07:00.907 SYMLINK libspdk_keyring_linux.so 00:07:00.907 LIB libspdk_accel_dsa.a 00:07:00.907 SO libspdk_scheduler_dynamic.so.4.0 00:07:00.907 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:00.907 SYMLINK libspdk_keyring_file.so 00:07:00.907 SYMLINK libspdk_accel_ioat.so 00:07:00.907 LIB libspdk_blob_bdev.a 00:07:00.907 SYMLINK libspdk_scheduler_gscheduler.so 00:07:00.907 SO libspdk_accel_dsa.so.5.0 00:07:00.907 SYMLINK libspdk_accel_error.so 00:07:00.907 SYMLINK libspdk_accel_iaa.so 00:07:00.907 SO libspdk_blob_bdev.so.11.0 00:07:00.907 SYMLINK libspdk_scheduler_dynamic.so 00:07:00.907 LIB libspdk_vfu_device.a 00:07:00.907 SYMLINK libspdk_accel_dsa.so 00:07:00.907 SYMLINK libspdk_blob_bdev.so 00:07:00.907 SO libspdk_vfu_device.so.3.0 00:07:01.168 SYMLINK libspdk_vfu_device.so 00:07:01.168 LIB libspdk_fsdev_aio.a 00:07:01.168 SO libspdk_fsdev_aio.so.1.0 00:07:01.168 LIB libspdk_sock_posix.a 00:07:01.429 SYMLINK libspdk_fsdev_aio.so 00:07:01.429 SO libspdk_sock_posix.so.6.0 00:07:01.429 SYMLINK libspdk_sock_posix.so 00:07:01.429 CC module/bdev/delay/vbdev_delay.o 00:07:01.429 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:01.429 CC module/blobfs/bdev/blobfs_bdev.o 00:07:01.429 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:01.429 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:01.429 CC module/bdev/nvme/bdev_nvme.o 00:07:01.429 CC module/bdev/lvol/vbdev_lvol.o 00:07:01.429 CC module/bdev/nvme/nvme_rpc.o 00:07:01.429 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:01.429 CC module/bdev/nvme/bdev_mdns_client.o 00:07:01.429 CC module/bdev/split/vbdev_split.o 00:07:01.429 CC module/bdev/nvme/vbdev_opal.o 00:07:01.429 CC module/bdev/split/vbdev_split_rpc.o 00:07:01.429 CC module/bdev/null/bdev_null.o 00:07:01.429 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:01.429 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:01.690 CC module/bdev/null/bdev_null_rpc.o 00:07:01.690 CC module/bdev/malloc/bdev_malloc.o 00:07:01.690 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:01.690 CC module/bdev/gpt/gpt.o 00:07:01.690 CC module/bdev/aio/bdev_aio.o 00:07:01.690 CC module/bdev/aio/bdev_aio_rpc.o 00:07:01.690 CC module/bdev/gpt/vbdev_gpt.o 00:07:01.690 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:01.690 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:01.690 CC module/bdev/iscsi/bdev_iscsi.o 00:07:01.690 CC module/bdev/ftl/bdev_ftl.o 00:07:01.690 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:01.690 CC module/bdev/error/vbdev_error.o 00:07:01.690 CC module/bdev/raid/bdev_raid.o 00:07:01.690 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:01.690 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:01.690 CC module/bdev/error/vbdev_error_rpc.o 00:07:01.690 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:01.690 CC module/bdev/raid/bdev_raid_rpc.o 00:07:01.690 CC module/bdev/raid/bdev_raid_sb.o 00:07:01.690 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:01.690 CC 
module/bdev/passthru/vbdev_passthru.o 00:07:01.690 CC module/bdev/raid/raid0.o 00:07:01.690 CC module/bdev/raid/raid1.o 00:07:01.690 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:01.690 CC module/bdev/raid/concat.o 00:07:01.949 LIB libspdk_blobfs_bdev.a 00:07:01.949 SO libspdk_blobfs_bdev.so.6.0 00:07:01.949 LIB libspdk_bdev_split.a 00:07:01.949 LIB libspdk_bdev_null.a 00:07:01.949 LIB libspdk_bdev_gpt.a 00:07:01.949 SO libspdk_bdev_split.so.6.0 00:07:01.949 SO libspdk_bdev_null.so.6.0 00:07:01.950 SO libspdk_bdev_gpt.so.6.0 00:07:01.950 SYMLINK libspdk_blobfs_bdev.so 00:07:01.950 LIB libspdk_bdev_error.a 00:07:01.950 LIB libspdk_bdev_ftl.a 00:07:01.950 LIB libspdk_bdev_passthru.a 00:07:01.950 LIB libspdk_bdev_aio.a 00:07:01.950 SO libspdk_bdev_error.so.6.0 00:07:01.950 SO libspdk_bdev_passthru.so.6.0 00:07:01.950 SO libspdk_bdev_ftl.so.6.0 00:07:01.950 SYMLINK libspdk_bdev_split.so 00:07:01.950 SYMLINK libspdk_bdev_gpt.so 00:07:01.950 LIB libspdk_bdev_delay.a 00:07:01.950 LIB libspdk_bdev_zone_block.a 00:07:01.950 SO libspdk_bdev_aio.so.6.0 00:07:01.950 SYMLINK libspdk_bdev_null.so 00:07:01.950 LIB libspdk_bdev_iscsi.a 00:07:01.950 LIB libspdk_bdev_malloc.a 00:07:02.211 SO libspdk_bdev_delay.so.6.0 00:07:02.211 SO libspdk_bdev_zone_block.so.6.0 00:07:02.211 SYMLINK libspdk_bdev_error.so 00:07:02.211 SYMLINK libspdk_bdev_ftl.so 00:07:02.211 SO libspdk_bdev_iscsi.so.6.0 00:07:02.211 SYMLINK libspdk_bdev_passthru.so 00:07:02.211 SO libspdk_bdev_malloc.so.6.0 00:07:02.211 SYMLINK libspdk_bdev_aio.so 00:07:02.211 SYMLINK libspdk_bdev_delay.so 00:07:02.211 SYMLINK libspdk_bdev_zone_block.so 00:07:02.211 LIB libspdk_bdev_lvol.a 00:07:02.211 SYMLINK libspdk_bdev_iscsi.so 00:07:02.211 LIB libspdk_bdev_virtio.a 00:07:02.211 SYMLINK libspdk_bdev_malloc.so 00:07:02.211 SO libspdk_bdev_lvol.so.6.0 00:07:02.211 SO libspdk_bdev_virtio.so.6.0 00:07:02.211 SYMLINK libspdk_bdev_lvol.so 00:07:02.211 SYMLINK libspdk_bdev_virtio.so 00:07:02.472 LIB libspdk_bdev_raid.a 00:07:02.733 SO libspdk_bdev_raid.so.6.0 00:07:02.733 SYMLINK libspdk_bdev_raid.so 00:07:04.120 LIB libspdk_bdev_nvme.a 00:07:04.120 SO libspdk_bdev_nvme.so.7.1 00:07:04.120 SYMLINK libspdk_bdev_nvme.so 00:07:04.692 CC module/event/subsystems/vmd/vmd.o 00:07:04.692 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:04.692 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:04.692 CC module/event/subsystems/iobuf/iobuf.o 00:07:04.692 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:04.692 CC module/event/subsystems/sock/sock.o 00:07:04.692 CC module/event/subsystems/keyring/keyring.o 00:07:04.692 CC module/event/subsystems/scheduler/scheduler.o 00:07:04.692 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:04.692 CC module/event/subsystems/fsdev/fsdev.o 00:07:04.953 LIB libspdk_event_vmd.a 00:07:04.953 LIB libspdk_event_vhost_blk.a 00:07:04.953 LIB libspdk_event_keyring.a 00:07:04.953 LIB libspdk_event_scheduler.a 00:07:04.953 LIB libspdk_event_sock.a 00:07:04.953 LIB libspdk_event_vfu_tgt.a 00:07:04.953 LIB libspdk_event_fsdev.a 00:07:04.953 LIB libspdk_event_iobuf.a 00:07:04.953 SO libspdk_event_keyring.so.1.0 00:07:04.953 SO libspdk_event_vhost_blk.so.3.0 00:07:04.953 SO libspdk_event_vmd.so.6.0 00:07:04.953 SO libspdk_event_scheduler.so.4.0 00:07:04.953 SO libspdk_event_vfu_tgt.so.3.0 00:07:04.953 SO libspdk_event_sock.so.5.0 00:07:04.953 SO libspdk_event_fsdev.so.1.0 00:07:04.953 SO libspdk_event_iobuf.so.3.0 00:07:05.215 SYMLINK libspdk_event_keyring.so 00:07:05.215 SYMLINK libspdk_event_vhost_blk.so 00:07:05.215 SYMLINK 
libspdk_event_vmd.so 00:07:05.215 SYMLINK libspdk_event_scheduler.so 00:07:05.215 SYMLINK libspdk_event_vfu_tgt.so 00:07:05.215 SYMLINK libspdk_event_sock.so 00:07:05.215 SYMLINK libspdk_event_fsdev.so 00:07:05.215 SYMLINK libspdk_event_iobuf.so 00:07:05.476 CC module/event/subsystems/accel/accel.o 00:07:05.737 LIB libspdk_event_accel.a 00:07:05.737 SO libspdk_event_accel.so.6.0 00:07:05.737 SYMLINK libspdk_event_accel.so 00:07:05.998 CC module/event/subsystems/bdev/bdev.o 00:07:06.260 LIB libspdk_event_bdev.a 00:07:06.260 SO libspdk_event_bdev.so.6.0 00:07:06.260 SYMLINK libspdk_event_bdev.so 00:07:06.832 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:06.832 CC module/event/subsystems/scsi/scsi.o 00:07:06.832 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:06.832 CC module/event/subsystems/ublk/ublk.o 00:07:06.832 CC module/event/subsystems/nbd/nbd.o 00:07:06.832 LIB libspdk_event_ublk.a 00:07:06.832 LIB libspdk_event_nbd.a 00:07:06.832 LIB libspdk_event_scsi.a 00:07:06.832 SO libspdk_event_scsi.so.6.0 00:07:06.832 SO libspdk_event_nbd.so.6.0 00:07:06.832 SO libspdk_event_ublk.so.3.0 00:07:07.094 LIB libspdk_event_nvmf.a 00:07:07.094 SYMLINK libspdk_event_scsi.so 00:07:07.094 SYMLINK libspdk_event_nbd.so 00:07:07.094 SYMLINK libspdk_event_ublk.so 00:07:07.094 SO libspdk_event_nvmf.so.6.0 00:07:07.094 SYMLINK libspdk_event_nvmf.so 00:07:07.355 CC module/event/subsystems/iscsi/iscsi.o 00:07:07.355 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:07.617 LIB libspdk_event_vhost_scsi.a 00:07:07.617 LIB libspdk_event_iscsi.a 00:07:07.617 SO libspdk_event_vhost_scsi.so.3.0 00:07:07.617 SO libspdk_event_iscsi.so.6.0 00:07:07.617 SYMLINK libspdk_event_vhost_scsi.so 00:07:07.617 SYMLINK libspdk_event_iscsi.so 00:07:07.880 SO libspdk.so.6.0 00:07:07.880 SYMLINK libspdk.so 00:07:08.142 CC app/trace_record/trace_record.o 00:07:08.406 CXX app/trace/trace.o 00:07:08.406 CC app/spdk_lspci/spdk_lspci.o 00:07:08.406 CC test/rpc_client/rpc_client_test.o 00:07:08.406 CC app/spdk_nvme_discover/discovery_aer.o 00:07:08.406 CC app/spdk_nvme_identify/identify.o 00:07:08.406 TEST_HEADER include/spdk/accel.h 00:07:08.406 CC app/spdk_nvme_perf/perf.o 00:07:08.406 TEST_HEADER include/spdk/accel_module.h 00:07:08.406 TEST_HEADER include/spdk/assert.h 00:07:08.406 CC app/spdk_top/spdk_top.o 00:07:08.406 TEST_HEADER include/spdk/barrier.h 00:07:08.406 TEST_HEADER include/spdk/base64.h 00:07:08.406 TEST_HEADER include/spdk/bdev.h 00:07:08.406 TEST_HEADER include/spdk/bdev_module.h 00:07:08.406 TEST_HEADER include/spdk/bdev_zone.h 00:07:08.406 TEST_HEADER include/spdk/bit_array.h 00:07:08.406 TEST_HEADER include/spdk/bit_pool.h 00:07:08.406 TEST_HEADER include/spdk/blob_bdev.h 00:07:08.406 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:08.406 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:08.406 TEST_HEADER include/spdk/blobfs.h 00:07:08.406 TEST_HEADER include/spdk/blob.h 00:07:08.406 TEST_HEADER include/spdk/conf.h 00:07:08.406 TEST_HEADER include/spdk/config.h 00:07:08.406 TEST_HEADER include/spdk/cpuset.h 00:07:08.406 TEST_HEADER include/spdk/crc16.h 00:07:08.406 TEST_HEADER include/spdk/crc32.h 00:07:08.406 TEST_HEADER include/spdk/crc64.h 00:07:08.406 TEST_HEADER include/spdk/dif.h 00:07:08.406 TEST_HEADER include/spdk/dma.h 00:07:08.406 TEST_HEADER include/spdk/endian.h 00:07:08.406 TEST_HEADER include/spdk/event.h 00:07:08.406 TEST_HEADER include/spdk/env_dpdk.h 00:07:08.406 TEST_HEADER include/spdk/env.h 00:07:08.406 TEST_HEADER include/spdk/fd.h 00:07:08.406 TEST_HEADER include/spdk/fd_group.h 
00:07:08.406 CC app/iscsi_tgt/iscsi_tgt.o 00:07:08.406 TEST_HEADER include/spdk/file.h 00:07:08.406 CC app/nvmf_tgt/nvmf_main.o 00:07:08.406 TEST_HEADER include/spdk/fsdev.h 00:07:08.406 TEST_HEADER include/spdk/fsdev_module.h 00:07:08.406 TEST_HEADER include/spdk/ftl.h 00:07:08.406 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:08.406 TEST_HEADER include/spdk/gpt_spec.h 00:07:08.406 CC app/spdk_dd/spdk_dd.o 00:07:08.406 TEST_HEADER include/spdk/hexlify.h 00:07:08.406 TEST_HEADER include/spdk/idxd.h 00:07:08.406 TEST_HEADER include/spdk/histogram_data.h 00:07:08.406 TEST_HEADER include/spdk/idxd_spec.h 00:07:08.406 TEST_HEADER include/spdk/init.h 00:07:08.406 TEST_HEADER include/spdk/ioat.h 00:07:08.406 TEST_HEADER include/spdk/ioat_spec.h 00:07:08.406 TEST_HEADER include/spdk/iscsi_spec.h 00:07:08.406 TEST_HEADER include/spdk/json.h 00:07:08.406 TEST_HEADER include/spdk/jsonrpc.h 00:07:08.406 TEST_HEADER include/spdk/keyring.h 00:07:08.406 CC app/spdk_tgt/spdk_tgt.o 00:07:08.406 TEST_HEADER include/spdk/likely.h 00:07:08.406 TEST_HEADER include/spdk/keyring_module.h 00:07:08.406 TEST_HEADER include/spdk/log.h 00:07:08.406 TEST_HEADER include/spdk/lvol.h 00:07:08.406 TEST_HEADER include/spdk/md5.h 00:07:08.406 TEST_HEADER include/spdk/mmio.h 00:07:08.406 TEST_HEADER include/spdk/memory.h 00:07:08.406 TEST_HEADER include/spdk/nbd.h 00:07:08.406 TEST_HEADER include/spdk/notify.h 00:07:08.406 TEST_HEADER include/spdk/net.h 00:07:08.406 TEST_HEADER include/spdk/nvme.h 00:07:08.406 TEST_HEADER include/spdk/nvme_intel.h 00:07:08.406 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:08.406 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:08.406 TEST_HEADER include/spdk/nvme_spec.h 00:07:08.406 TEST_HEADER include/spdk/nvme_zns.h 00:07:08.406 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:08.406 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:08.406 TEST_HEADER include/spdk/nvmf.h 00:07:08.406 TEST_HEADER include/spdk/nvmf_spec.h 00:07:08.406 TEST_HEADER include/spdk/nvmf_transport.h 00:07:08.406 TEST_HEADER include/spdk/opal.h 00:07:08.406 TEST_HEADER include/spdk/opal_spec.h 00:07:08.406 TEST_HEADER include/spdk/pci_ids.h 00:07:08.406 TEST_HEADER include/spdk/pipe.h 00:07:08.406 TEST_HEADER include/spdk/queue.h 00:07:08.406 TEST_HEADER include/spdk/reduce.h 00:07:08.406 TEST_HEADER include/spdk/scheduler.h 00:07:08.406 TEST_HEADER include/spdk/rpc.h 00:07:08.406 TEST_HEADER include/spdk/scsi.h 00:07:08.406 TEST_HEADER include/spdk/sock.h 00:07:08.406 TEST_HEADER include/spdk/scsi_spec.h 00:07:08.406 TEST_HEADER include/spdk/stdinc.h 00:07:08.406 TEST_HEADER include/spdk/string.h 00:07:08.406 TEST_HEADER include/spdk/thread.h 00:07:08.406 TEST_HEADER include/spdk/trace.h 00:07:08.406 TEST_HEADER include/spdk/trace_parser.h 00:07:08.406 TEST_HEADER include/spdk/tree.h 00:07:08.406 TEST_HEADER include/spdk/ublk.h 00:07:08.406 TEST_HEADER include/spdk/util.h 00:07:08.406 TEST_HEADER include/spdk/version.h 00:07:08.406 TEST_HEADER include/spdk/uuid.h 00:07:08.406 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:08.406 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:08.406 TEST_HEADER include/spdk/vhost.h 00:07:08.406 TEST_HEADER include/spdk/vmd.h 00:07:08.406 TEST_HEADER include/spdk/xor.h 00:07:08.406 TEST_HEADER include/spdk/zipf.h 00:07:08.406 CXX test/cpp_headers/accel.o 00:07:08.406 CXX test/cpp_headers/accel_module.o 00:07:08.406 CXX test/cpp_headers/barrier.o 00:07:08.406 CXX test/cpp_headers/assert.o 00:07:08.406 CXX test/cpp_headers/base64.o 00:07:08.406 CXX test/cpp_headers/bdev.o 00:07:08.406 CXX 
test/cpp_headers/bdev_module.o 00:07:08.406 CXX test/cpp_headers/bdev_zone.o 00:07:08.406 CXX test/cpp_headers/bit_pool.o 00:07:08.406 CXX test/cpp_headers/bit_array.o 00:07:08.406 CXX test/cpp_headers/blob_bdev.o 00:07:08.406 CXX test/cpp_headers/blobfs_bdev.o 00:07:08.406 CXX test/cpp_headers/blobfs.o 00:07:08.406 CXX test/cpp_headers/blob.o 00:07:08.406 CXX test/cpp_headers/conf.o 00:07:08.406 CXX test/cpp_headers/config.o 00:07:08.406 CXX test/cpp_headers/cpuset.o 00:07:08.406 CXX test/cpp_headers/crc64.o 00:07:08.406 CXX test/cpp_headers/crc32.o 00:07:08.406 CXX test/cpp_headers/crc16.o 00:07:08.406 CXX test/cpp_headers/dif.o 00:07:08.406 CXX test/cpp_headers/dma.o 00:07:08.406 CXX test/cpp_headers/endian.o 00:07:08.406 CXX test/cpp_headers/env_dpdk.o 00:07:08.406 CXX test/cpp_headers/event.o 00:07:08.406 CXX test/cpp_headers/env.o 00:07:08.406 CXX test/cpp_headers/fd_group.o 00:07:08.406 CXX test/cpp_headers/fd.o 00:07:08.406 CXX test/cpp_headers/file.o 00:07:08.406 CXX test/cpp_headers/ftl.o 00:07:08.406 CXX test/cpp_headers/fsdev.o 00:07:08.406 CXX test/cpp_headers/fsdev_module.o 00:07:08.406 CXX test/cpp_headers/fuse_dispatcher.o 00:07:08.406 CXX test/cpp_headers/gpt_spec.o 00:07:08.406 CXX test/cpp_headers/histogram_data.o 00:07:08.406 CXX test/cpp_headers/hexlify.o 00:07:08.406 CXX test/cpp_headers/idxd.o 00:07:08.406 CXX test/cpp_headers/idxd_spec.o 00:07:08.406 CXX test/cpp_headers/ioat.o 00:07:08.406 CXX test/cpp_headers/init.o 00:07:08.406 CXX test/cpp_headers/ioat_spec.o 00:07:08.406 CXX test/cpp_headers/iscsi_spec.o 00:07:08.406 CXX test/cpp_headers/jsonrpc.o 00:07:08.406 CXX test/cpp_headers/json.o 00:07:08.406 CXX test/cpp_headers/keyring_module.o 00:07:08.406 CC examples/util/zipf/zipf.o 00:07:08.406 CXX test/cpp_headers/keyring.o 00:07:08.677 CXX test/cpp_headers/likely.o 00:07:08.677 CXX test/cpp_headers/lvol.o 00:07:08.677 LINK spdk_lspci 00:07:08.677 CXX test/cpp_headers/log.o 00:07:08.677 CXX test/cpp_headers/memory.o 00:07:08.677 CXX test/cpp_headers/md5.o 00:07:08.677 CXX test/cpp_headers/nbd.o 00:07:08.677 CXX test/cpp_headers/mmio.o 00:07:08.677 CXX test/cpp_headers/net.o 00:07:08.677 CXX test/cpp_headers/nvme_intel.o 00:07:08.677 CC examples/ioat/perf/perf.o 00:07:08.677 CC test/thread/poller_perf/poller_perf.o 00:07:08.677 CXX test/cpp_headers/nvme.o 00:07:08.677 CXX test/cpp_headers/notify.o 00:07:08.677 CXX test/cpp_headers/nvme_ocssd.o 00:07:08.677 CXX test/cpp_headers/nvme_spec.o 00:07:08.677 CXX test/cpp_headers/nvmf_cmd.o 00:07:08.677 CC examples/ioat/verify/verify.o 00:07:08.677 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:08.677 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:08.677 CXX test/cpp_headers/nvme_zns.o 00:07:08.677 CC test/env/vtophys/vtophys.o 00:07:08.677 CXX test/cpp_headers/nvmf_transport.o 00:07:08.677 CXX test/cpp_headers/nvmf.o 00:07:08.677 CXX test/cpp_headers/opal.o 00:07:08.677 CXX test/cpp_headers/opal_spec.o 00:07:08.677 CXX test/cpp_headers/nvmf_spec.o 00:07:08.677 CXX test/cpp_headers/pci_ids.o 00:07:08.677 CXX test/cpp_headers/pipe.o 00:07:08.677 CXX test/cpp_headers/scsi.o 00:07:08.677 CXX test/cpp_headers/queue.o 00:07:08.677 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:08.677 CXX test/cpp_headers/reduce.o 00:07:08.677 CXX test/cpp_headers/rpc.o 00:07:08.677 CXX test/cpp_headers/scheduler.o 00:07:08.677 CC test/env/pci/pci_ut.o 00:07:08.677 CXX test/cpp_headers/scsi_spec.o 00:07:08.677 CXX test/cpp_headers/string.o 00:07:08.677 CC app/fio/nvme/fio_plugin.o 00:07:08.677 CXX test/cpp_headers/sock.o 00:07:08.677 CXX 
test/cpp_headers/stdinc.o 00:07:08.677 CXX test/cpp_headers/trace.o 00:07:08.677 CC test/app/histogram_perf/histogram_perf.o 00:07:08.677 CXX test/cpp_headers/thread.o 00:07:08.677 CC test/app/bdev_svc/bdev_svc.o 00:07:08.677 CXX test/cpp_headers/trace_parser.o 00:07:08.677 CXX test/cpp_headers/tree.o 00:07:08.677 CXX test/cpp_headers/ublk.o 00:07:08.677 CC test/env/memory/memory_ut.o 00:07:08.677 CXX test/cpp_headers/util.o 00:07:08.677 CXX test/cpp_headers/vfio_user_pci.o 00:07:08.677 CXX test/cpp_headers/uuid.o 00:07:08.677 CXX test/cpp_headers/vfio_user_spec.o 00:07:08.677 LINK rpc_client_test 00:07:08.677 CXX test/cpp_headers/vmd.o 00:07:08.677 CXX test/cpp_headers/version.o 00:07:08.677 CXX test/cpp_headers/zipf.o 00:07:08.677 CXX test/cpp_headers/vhost.o 00:07:08.677 CXX test/cpp_headers/xor.o 00:07:08.677 CC test/dma/test_dma/test_dma.o 00:07:08.677 CC test/app/jsoncat/jsoncat.o 00:07:08.677 LINK spdk_nvme_discover 00:07:08.677 CC app/fio/bdev/fio_plugin.o 00:07:08.677 CC test/app/stub/stub.o 00:07:08.951 LINK interrupt_tgt 00:07:08.951 LINK nvmf_tgt 00:07:08.951 LINK iscsi_tgt 00:07:09.231 LINK spdk_trace_record 00:07:09.231 LINK spdk_tgt 00:07:09.231 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:09.498 CC test/env/mem_callbacks/mem_callbacks.o 00:07:09.498 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:09.498 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:09.498 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:09.498 LINK spdk_dd 00:07:09.498 LINK spdk_trace 00:07:09.498 LINK poller_perf 00:07:09.498 LINK verify 00:07:09.760 LINK jsoncat 00:07:09.760 LINK histogram_perf 00:07:09.760 LINK env_dpdk_post_init 00:07:09.760 LINK pci_ut 00:07:09.760 LINK bdev_svc 00:07:09.760 LINK ioat_perf 00:07:09.760 LINK zipf 00:07:09.760 LINK spdk_top 00:07:10.021 LINK vtophys 00:07:10.021 LINK stub 00:07:10.021 LINK test_dma 00:07:10.021 LINK nvme_fuzz 00:07:10.021 CC app/vhost/vhost.o 00:07:10.282 LINK spdk_nvme 00:07:10.282 CC test/event/reactor_perf/reactor_perf.o 00:07:10.282 CC test/event/reactor/reactor.o 00:07:10.282 CC test/event/event_perf/event_perf.o 00:07:10.282 CC test/event/app_repeat/app_repeat.o 00:07:10.282 CC test/event/scheduler/scheduler.o 00:07:10.282 LINK vhost_fuzz 00:07:10.282 LINK mem_callbacks 00:07:10.282 LINK spdk_bdev 00:07:10.282 LINK vhost 00:07:10.282 CC examples/idxd/perf/perf.o 00:07:10.282 LINK event_perf 00:07:10.282 CC examples/vmd/lsvmd/lsvmd.o 00:07:10.282 CC examples/vmd/led/led.o 00:07:10.282 CC examples/sock/hello_world/hello_sock.o 00:07:10.282 LINK reactor_perf 00:07:10.282 LINK reactor 00:07:10.543 LINK app_repeat 00:07:10.543 CC examples/thread/thread/thread_ex.o 00:07:10.543 LINK spdk_nvme_perf 00:07:10.543 LINK spdk_nvme_identify 00:07:10.543 LINK scheduler 00:07:10.543 LINK lsvmd 00:07:10.543 LINK led 00:07:10.804 CC test/nvme/reserve/reserve.o 00:07:10.804 CC test/nvme/err_injection/err_injection.o 00:07:10.804 CC test/nvme/reset/reset.o 00:07:10.804 CC test/nvme/overhead/overhead.o 00:07:10.804 CC test/nvme/e2edp/nvme_dp.o 00:07:10.804 LINK hello_sock 00:07:10.804 CC test/nvme/aer/aer.o 00:07:10.804 CC test/nvme/connect_stress/connect_stress.o 00:07:10.804 CC test/nvme/sgl/sgl.o 00:07:10.804 CC test/nvme/fused_ordering/fused_ordering.o 00:07:10.804 CC test/nvme/startup/startup.o 00:07:10.804 CC test/nvme/simple_copy/simple_copy.o 00:07:10.804 CC test/nvme/boot_partition/boot_partition.o 00:07:10.804 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:10.804 CC test/nvme/compliance/nvme_compliance.o 00:07:10.804 CC test/nvme/fdp/fdp.o 00:07:10.804 
CC test/nvme/cuse/cuse.o 00:07:10.804 CC test/accel/dif/dif.o 00:07:10.804 CC test/blobfs/mkfs/mkfs.o 00:07:10.804 LINK idxd_perf 00:07:10.804 LINK memory_ut 00:07:10.804 LINK thread 00:07:10.804 CC test/lvol/esnap/esnap.o 00:07:10.804 LINK err_injection 00:07:10.804 LINK reserve 00:07:11.065 LINK doorbell_aers 00:07:11.065 LINK boot_partition 00:07:11.065 LINK connect_stress 00:07:11.065 LINK startup 00:07:11.065 LINK fused_ordering 00:07:11.065 LINK reset 00:07:11.065 LINK simple_copy 00:07:11.065 LINK sgl 00:07:11.065 LINK aer 00:07:11.065 LINK nvme_dp 00:07:11.065 LINK mkfs 00:07:11.065 LINK overhead 00:07:11.065 LINK nvme_compliance 00:07:11.065 LINK fdp 00:07:11.326 CC examples/nvme/hello_world/hello_world.o 00:07:11.326 CC examples/nvme/abort/abort.o 00:07:11.326 CC examples/nvme/reconnect/reconnect.o 00:07:11.326 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:11.326 CC examples/nvme/arbitration/arbitration.o 00:07:11.326 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:11.326 CC examples/nvme/hotplug/hotplug.o 00:07:11.326 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:11.326 LINK iscsi_fuzz 00:07:11.326 LINK dif 00:07:11.326 CC examples/accel/perf/accel_perf.o 00:07:11.326 CC examples/blob/cli/blobcli.o 00:07:11.326 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:11.326 LINK pmr_persistence 00:07:11.326 CC examples/blob/hello_world/hello_blob.o 00:07:11.326 LINK cmb_copy 00:07:11.587 LINK hello_world 00:07:11.587 LINK hotplug 00:07:11.587 LINK reconnect 00:07:11.587 LINK arbitration 00:07:11.587 LINK abort 00:07:11.587 LINK nvme_manage 00:07:11.848 LINK hello_blob 00:07:11.848 LINK hello_fsdev 00:07:11.848 LINK accel_perf 00:07:11.848 LINK blobcli 00:07:11.848 LINK cuse 00:07:12.159 CC test/bdev/bdevio/bdevio.o 00:07:12.480 LINK bdevio 00:07:12.480 CC examples/bdev/bdevperf/bdevperf.o 00:07:12.480 CC examples/bdev/hello_world/hello_bdev.o 00:07:12.762 LINK hello_bdev 00:07:13.332 LINK bdevperf 00:07:13.903 CC examples/nvmf/nvmf/nvmf.o 00:07:14.163 LINK nvmf 00:07:15.547 LINK esnap 00:07:15.547 00:07:15.547 real 0m55.772s 00:07:15.547 user 8m8.240s 00:07:15.547 sys 5m59.122s 00:07:15.547 06:18:35 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:07:15.547 06:18:35 make -- common/autotest_common.sh@10 -- $ set +x 00:07:15.547 ************************************ 00:07:15.547 END TEST make 00:07:15.547 ************************************ 00:07:15.809 06:18:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:15.809 06:18:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:15.809 06:18:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:15.809 06:18:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:15.809 06:18:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:15.809 06:18:35 -- pm/common@44 -- $ pid=2384488 00:07:15.809 06:18:35 -- pm/common@50 -- $ kill -TERM 2384488 00:07:15.809 06:18:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:15.809 06:18:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:15.809 06:18:35 -- pm/common@44 -- $ pid=2384489 00:07:15.809 06:18:35 -- pm/common@50 -- $ kill -TERM 2384489 00:07:15.809 06:18:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:15.809 06:18:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:15.809 06:18:35 
-- pm/common@44 -- $ pid=2384491 00:07:15.809 06:18:35 -- pm/common@50 -- $ kill -TERM 2384491 00:07:15.809 06:18:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:15.809 06:18:35 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:15.809 06:18:35 -- pm/common@44 -- $ pid=2384515 00:07:15.809 06:18:35 -- pm/common@50 -- $ sudo -E kill -TERM 2384515 00:07:15.809 06:18:35 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:15.809 06:18:35 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:15.809 06:18:35 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:15.809 06:18:35 -- common/autotest_common.sh@1691 -- # lcov --version 00:07:15.809 06:18:35 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:16.071 06:18:35 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:16.071 06:18:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.071 06:18:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.071 06:18:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.071 06:18:35 -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.071 06:18:35 -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.071 06:18:35 -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.071 06:18:35 -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.071 06:18:35 -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.071 06:18:35 -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.071 06:18:35 -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.071 06:18:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.071 06:18:35 -- scripts/common.sh@344 -- # case "$op" in 00:07:16.071 06:18:35 -- scripts/common.sh@345 -- # : 1 00:07:16.071 06:18:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.071 06:18:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.071 06:18:35 -- scripts/common.sh@365 -- # decimal 1 00:07:16.071 06:18:35 -- scripts/common.sh@353 -- # local d=1 00:07:16.071 06:18:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.071 06:18:35 -- scripts/common.sh@355 -- # echo 1 00:07:16.071 06:18:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.071 06:18:35 -- scripts/common.sh@366 -- # decimal 2 00:07:16.071 06:18:35 -- scripts/common.sh@353 -- # local d=2 00:07:16.071 06:18:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.071 06:18:35 -- scripts/common.sh@355 -- # echo 2 00:07:16.071 06:18:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.071 06:18:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.071 06:18:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.071 06:18:35 -- scripts/common.sh@368 -- # return 0 00:07:16.071 06:18:35 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.071 06:18:35 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:16.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.071 --rc genhtml_branch_coverage=1 00:07:16.071 --rc genhtml_function_coverage=1 00:07:16.071 --rc genhtml_legend=1 00:07:16.071 --rc geninfo_all_blocks=1 00:07:16.071 --rc geninfo_unexecuted_blocks=1 00:07:16.071 00:07:16.071 ' 00:07:16.071 06:18:35 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:16.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.071 --rc genhtml_branch_coverage=1 00:07:16.071 --rc genhtml_function_coverage=1 00:07:16.071 --rc genhtml_legend=1 00:07:16.071 --rc geninfo_all_blocks=1 00:07:16.071 --rc geninfo_unexecuted_blocks=1 00:07:16.071 00:07:16.071 ' 00:07:16.071 06:18:35 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:16.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.071 --rc genhtml_branch_coverage=1 00:07:16.071 --rc genhtml_function_coverage=1 00:07:16.071 --rc genhtml_legend=1 00:07:16.071 --rc geninfo_all_blocks=1 00:07:16.071 --rc geninfo_unexecuted_blocks=1 00:07:16.071 00:07:16.071 ' 00:07:16.072 06:18:35 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:16.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.072 --rc genhtml_branch_coverage=1 00:07:16.072 --rc genhtml_function_coverage=1 00:07:16.072 --rc genhtml_legend=1 00:07:16.072 --rc geninfo_all_blocks=1 00:07:16.072 --rc geninfo_unexecuted_blocks=1 00:07:16.072 00:07:16.072 ' 00:07:16.072 06:18:35 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.072 06:18:35 -- nvmf/common.sh@7 -- # uname -s 00:07:16.072 06:18:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.072 06:18:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.072 06:18:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.072 06:18:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.072 06:18:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.072 06:18:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.072 06:18:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.072 06:18:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.072 06:18:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.072 06:18:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.072 06:18:35 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:16.072 06:18:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:16.072 06:18:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.072 06:18:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.072 06:18:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.072 06:18:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.072 06:18:35 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.072 06:18:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.072 06:18:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.072 06:18:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.072 06:18:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.072 06:18:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.072 06:18:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.072 06:18:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.072 06:18:35 -- paths/export.sh@5 -- # export PATH 00:07:16.072 06:18:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.072 06:18:35 -- nvmf/common.sh@51 -- # : 0 00:07:16.072 06:18:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:16.072 06:18:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:16.072 06:18:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.072 06:18:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.072 06:18:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.072 06:18:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:16.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.072 06:18:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.072 06:18:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.072 06:18:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.072 06:18:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:16.072 06:18:35 -- spdk/autotest.sh@32 -- # uname -s 00:07:16.072 06:18:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:16.072 06:18:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:16.072 06:18:35 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
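The `[: : integer expression expected` message a few entries above is bash objecting at test/nvmf/common.sh line 33, where the trace shows `'[' '' -eq 1 ']'`: a variable expanded to the empty string, and `[` needs both operands of -eq to be integers. The failed test just returns non-zero, so the branch is skipped and the run carries on. A minimal reproduction plus a common guard, with `flag` as a hypothetical stand-in for whatever variable is unset in that script:

flag=''
[ "$flag" -eq 1 ]          # prints "[: : integer expression expected", returns 2
[ "${flag:-0}" -eq 1 ]     # defaulting the empty value to 0 gives a clean false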
00:07:16.072 06:18:35 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:16.072 06:18:35 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:16.072 06:18:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:16.072 06:18:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:16.072 06:18:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:16.072 06:18:35 -- spdk/autotest.sh@48 -- # udevadm_pid=2450445 00:07:16.072 06:18:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:16.072 06:18:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:16.072 06:18:35 -- pm/common@17 -- # local monitor 00:07:16.072 06:18:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:16.072 06:18:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:16.072 06:18:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:16.072 06:18:35 -- pm/common@21 -- # date +%s 00:07:16.072 06:18:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:16.072 06:18:35 -- pm/common@21 -- # date +%s 00:07:16.072 06:18:35 -- pm/common@25 -- # sleep 1 00:07:16.072 06:18:35 -- pm/common@21 -- # date +%s 00:07:16.072 06:18:35 -- pm/common@21 -- # date +%s 00:07:16.072 06:18:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079915 00:07:16.072 06:18:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079915 00:07:16.072 06:18:35 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079915 00:07:16.072 06:18:35 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079915 00:07:16.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079915_collect-vmstat.pm.log 00:07:16.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079915_collect-cpu-load.pm.log 00:07:16.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079915_collect-cpu-temp.pm.log 00:07:16.072 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079915_collect-bmc-pm.bmc.pm.log 00:07:17.015 06:18:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:17.015 06:18:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:17.015 06:18:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.015 06:18:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.015 06:18:36 -- spdk/autotest.sh@59 -- # create_test_list 00:07:17.015 06:18:36 -- common/autotest_common.sh@750 -- # xtrace_disable 00:07:17.015 06:18:36 -- common/autotest_common.sh@10 -- # set +x 00:07:17.015 06:18:36 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:17.015 06:18:36 
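The four `Redirecting to ... .pm.log` lines above are the resource monitors starting up: collect-cpu-load, collect-vmstat, collect-cpu-temp and (under sudo -E, presumably because BMC power readings need root) collect-bmc-pm are each launched with the same `date +%s` epoch, 1732079915 here, as their `-p` prefix, so one run's logs under output/power share a name stem. A rough sketch of that launch pattern, assuming `$rootdir` points at the spdk checkout; the real wrapper is pm/common, which also writes the .pid files that the stop_monitor_resources trace earlier in this log kills off:

out=$rootdir/../output/power
stamp=$(date +%s)                               # 1732079915 in this run
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
  "$rootdir/scripts/perf/pm/$mon" -d "$out" -l -p "monitor.autotest.sh.$stamp" &
done
# backgrounding with & is an assumption here; pm/common manages the real pids
sudo -E "$rootdir/scripts/perf/pm/collect-bmc-pm" -d "$out" -l -p "monitor.autotest.sh.$stamp" &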
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:17.015 06:18:36 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:17.015 06:18:36 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:17.015 06:18:36 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:17.015 06:18:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:17.015 06:18:36 -- common/autotest_common.sh@1455 -- # uname 00:07:17.015 06:18:36 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:07:17.015 06:18:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:17.015 06:18:36 -- common/autotest_common.sh@1475 -- # uname 00:07:17.015 06:18:36 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:07:17.015 06:18:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:17.015 06:18:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:17.276 lcov: LCOV version 1.15 00:07:17.276 06:18:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:07:43.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:43.863 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:48.071 06:19:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:48.071 06:19:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.071 06:19:07 -- common/autotest_common.sh@10 -- # set +x 00:07:48.071 06:19:07 -- spdk/autotest.sh@78 -- # rm -f 00:07:48.071 06:19:07 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:51.374 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:65:00.0 (144d a80a): Already using the nvme driver 00:07:51.374 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:07:51.374 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:07:51.635 06:19:11 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:07:51.635 06:19:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:51.635 06:19:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:51.635 06:19:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:51.635 06:19:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:51.635 06:19:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:51.635 06:19:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:51.635 06:19:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:51.635 06:19:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:51.635 06:19:11 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:51.635 06:19:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:51.635 06:19:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:51.635 06:19:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:51.635 06:19:11 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:51.635 06:19:11 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:51.635 No valid GPT data, bailing 00:07:51.635 06:19:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:51.635 06:19:11 -- scripts/common.sh@394 -- # pt= 00:07:51.635 06:19:11 -- scripts/common.sh@395 -- # return 1 00:07:51.635 06:19:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:51.635 1+0 records in 00:07:51.635 1+0 records out 00:07:51.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00189672 s, 553 MB/s 00:07:51.635 06:19:11 -- spdk/autotest.sh@105 -- # sync 00:07:51.635 06:19:11 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:51.635 06:19:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:51.635 06:19:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:01.635 06:19:20 -- spdk/autotest.sh@111 -- # uname -s 00:08:01.635 06:19:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:01.635 06:19:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:01.635 06:19:20 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:04.184 Hugepages 00:08:04.184 node hugesize free / total 00:08:04.184 node0 1048576kB 0 / 0 00:08:04.184 node0 2048kB 0 / 0 00:08:04.184 node1 1048576kB 0 / 0 00:08:04.184 node1 2048kB 0 / 0 00:08:04.184 00:08:04.184 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:04.184 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:08:04.184 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:08:04.184 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:08:04.184 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:08:04.184 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:08:04.184 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:08:04.184 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:08:04.184 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:08:04.184 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:08:04.184 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:08:04.184 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:08:04.184 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:08:04.184 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:08:04.184 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:08:04.184 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:08:04.184 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:08:04.184 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:08:04.184 06:19:23 -- spdk/autotest.sh@117 -- # uname -s 00:08:04.184 06:19:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:04.184 06:19:23 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:04.184 06:19:23 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:07.489 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:07.489 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:07.489 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:07.489 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:07.489 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:07.489 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:07.489 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:07.489 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:07.749 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:07.749 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:07.749 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:07.749 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:07.749 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:07.749 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:07.749 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:07.749 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:09.662 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:08:09.923 06:19:29 -- common/autotest_common.sh@1515 -- # sleep 1 00:08:10.868 06:19:30 -- common/autotest_common.sh@1516 -- # bdfs=() 00:08:10.868 06:19:30 -- common/autotest_common.sh@1516 -- # local bdfs 00:08:10.868 06:19:30 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:08:10.868 06:19:30 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:08:10.868 06:19:30 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:10.868 06:19:30 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:10.868 06:19:30 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:10.868 06:19:30 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:10.868 06:19:30 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:10.868 06:19:30 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:08:10.868 06:19:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:08:10.868 06:19:30 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:14.173 Waiting for block devices as requested 00:08:14.434 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:14.434 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:08:14.434 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:14.695 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:14.695 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:14.695 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:14.956 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:14.956 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:08:14.956 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:08:15.217 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:15.217 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:08:15.479 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:15.479 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:15.479 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:15.740 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:15.740 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:15.740 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:08:16.311 06:19:35 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:16.311 06:19:35 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:08:16.311 06:19:35 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:08:16.311 06:19:35 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:08:16.311 06:19:35 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:16.311 06:19:35 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:08:16.311 06:19:35 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:16.311 06:19:35 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:08:16.311 06:19:35 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:08:16.311 06:19:35 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:08:16.311 06:19:35 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:08:16.311 06:19:35 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:16.311 06:19:35 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:16.311 06:19:35 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:08:16.311 06:19:35 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:16.311 06:19:35 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:16.311 06:19:35 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:08:16.311 06:19:35 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:16.311 06:19:35 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:16.311 06:19:35 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:16.311 06:19:35 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:16.311 06:19:35 -- common/autotest_common.sh@1541 -- # continue 00:08:16.311 06:19:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:16.311 06:19:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.311 06:19:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.311 06:19:36 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:16.311 06:19:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.311 06:19:36 -- common/autotest_common.sh@10 -- # set +x 00:08:16.311 06:19:36 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:19.615 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:19.615 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:19.615 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:19.615 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:19.876 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:08:20.450 06:19:40 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:08:20.450 06:19:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.450 06:19:40 -- common/autotest_common.sh@10 -- # set +x 00:08:20.450 06:19:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:20.450 06:19:40 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:20.450 06:19:40 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:20.450 06:19:40 -- common/autotest_common.sh@1561 -- # bdfs=() 00:08:20.450 06:19:40 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:08:20.450 06:19:40 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:08:20.450 06:19:40 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:08:20.450 06:19:40 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:08:20.450 06:19:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:20.450 06:19:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:20.450 06:19:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:20.450 06:19:40 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:20.450 06:19:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:20.450 06:19:40 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:08:20.450 06:19:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:08:20.450 06:19:40 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:20.450 06:19:40 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:08:20.450 06:19:40 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:08:20.450 06:19:40 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:08:20.450 06:19:40 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:08:20.450 06:19:40 -- common/autotest_common.sh@1570 -- # return 0 00:08:20.450 06:19:40 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:08:20.450 06:19:40 -- common/autotest_common.sh@1578 -- # return 0 00:08:20.450 06:19:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:20.450 06:19:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:20.450 06:19:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:20.450 06:19:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:20.450 06:19:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:20.450 06:19:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:20.450 06:19:40 -- common/autotest_common.sh@10 -- # set +x 00:08:20.450 06:19:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:20.450 06:19:40 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:20.450 06:19:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:20.450 06:19:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.450 06:19:40 -- common/autotest_common.sh@10 -- # set +x 00:08:20.450 ************************************ 00:08:20.450 START TEST env 00:08:20.450 ************************************ 00:08:20.450 06:19:40 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:20.712 * Looking for test storage... 
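In the opal_revert_cleanup trace a little above, get_nvme_bdfs_by_id reads each controller's PCI device id out of sysfs and matches it against the escaped glob `\0\x\0\a\5\4`, which is simply the literal string 0x0a54 with every character escaped. The controller in this run reports 0xa80a (vendor 144d, a Samsung part per the BDF table earlier), so the bdfs array stays empty and nothing is reverted, which is what the `(( 0 > 0 ))` check confirms. A stand-alone sketch of that filter; as in the trace, the bdf list would come from gen_nvme.sh piped through jq:

for bdf in 0000:65:00.0; do
  device=$(cat "/sys/bus/pci/devices/$bdf/device")   # "0xa80a" on this box
  if [[ $device == 0x0a54 ]]; then                   # only this exact device id qualifies
    echo "$bdf would get its Opal state reverted"
  fi
done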
00:08:20.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:20.712 06:19:40 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:20.712 06:19:40 env -- common/autotest_common.sh@1691 -- # lcov --version 00:08:20.712 06:19:40 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:20.712 06:19:40 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:20.712 06:19:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.712 06:19:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.712 06:19:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.712 06:19:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.712 06:19:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.712 06:19:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.712 06:19:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.712 06:19:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.712 06:19:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.712 06:19:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.712 06:19:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.712 06:19:40 env -- scripts/common.sh@344 -- # case "$op" in 00:08:20.712 06:19:40 env -- scripts/common.sh@345 -- # : 1 00:08:20.712 06:19:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.712 06:19:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.712 06:19:40 env -- scripts/common.sh@365 -- # decimal 1 00:08:20.712 06:19:40 env -- scripts/common.sh@353 -- # local d=1 00:08:20.712 06:19:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.712 06:19:40 env -- scripts/common.sh@355 -- # echo 1 00:08:20.712 06:19:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.712 06:19:40 env -- scripts/common.sh@366 -- # decimal 2 00:08:20.712 06:19:40 env -- scripts/common.sh@353 -- # local d=2 00:08:20.712 06:19:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.712 06:19:40 env -- scripts/common.sh@355 -- # echo 2 00:08:20.712 06:19:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.712 06:19:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.712 06:19:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.713 06:19:40 env -- scripts/common.sh@368 -- # return 0 00:08:20.713 06:19:40 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.713 06:19:40 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:20.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.713 --rc genhtml_branch_coverage=1 00:08:20.713 --rc genhtml_function_coverage=1 00:08:20.713 --rc genhtml_legend=1 00:08:20.713 --rc geninfo_all_blocks=1 00:08:20.713 --rc geninfo_unexecuted_blocks=1 00:08:20.713 00:08:20.713 ' 00:08:20.713 06:19:40 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:20.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.713 --rc genhtml_branch_coverage=1 00:08:20.713 --rc genhtml_function_coverage=1 00:08:20.713 --rc genhtml_legend=1 00:08:20.713 --rc geninfo_all_blocks=1 00:08:20.713 --rc geninfo_unexecuted_blocks=1 00:08:20.713 00:08:20.713 ' 00:08:20.713 06:19:40 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:20.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.713 --rc genhtml_branch_coverage=1 00:08:20.713 --rc genhtml_function_coverage=1 
00:08:20.713 --rc genhtml_legend=1 00:08:20.713 --rc geninfo_all_blocks=1 00:08:20.713 --rc geninfo_unexecuted_blocks=1 00:08:20.713 00:08:20.713 ' 00:08:20.713 06:19:40 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:20.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.713 --rc genhtml_branch_coverage=1 00:08:20.713 --rc genhtml_function_coverage=1 00:08:20.713 --rc genhtml_legend=1 00:08:20.713 --rc geninfo_all_blocks=1 00:08:20.713 --rc geninfo_unexecuted_blocks=1 00:08:20.713 00:08:20.713 ' 00:08:20.713 06:19:40 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:20.713 06:19:40 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:20.713 06:19:40 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.713 06:19:40 env -- common/autotest_common.sh@10 -- # set +x 00:08:20.713 ************************************ 00:08:20.713 START TEST env_memory 00:08:20.713 ************************************ 00:08:20.713 06:19:40 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:20.713 00:08:20.713 00:08:20.713 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.713 http://cunit.sourceforge.net/ 00:08:20.713 00:08:20.713 00:08:20.713 Suite: memory 00:08:20.713 Test: alloc and free memory map ...[2024-11-20 06:19:40.597655] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:20.713 passed 00:08:20.713 Test: mem map translation ...[2024-11-20 06:19:40.623321] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:20.713 [2024-11-20 06:19:40.623346] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:20.713 [2024-11-20 06:19:40.623393] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:20.713 [2024-11-20 06:19:40.623400] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:20.976 passed 00:08:20.976 Test: mem map registration ...[2024-11-20 06:19:40.678669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:20.976 [2024-11-20 06:19:40.678695] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:20.976 passed 00:08:20.976 Test: mem map adjacent registrations ...passed 00:08:20.976 00:08:20.976 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.976 suites 1 1 n/a 0 0 00:08:20.976 tests 4 4 4 0 0 00:08:20.976 asserts 152 152 152 0 n/a 00:08:20.976 00:08:20.976 Elapsed time = 0.191 seconds 00:08:20.976 00:08:20.976 real 0m0.206s 00:08:20.976 user 0m0.192s 00:08:20.976 sys 0m0.013s 00:08:20.976 06:19:40 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.976 06:19:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:08:20.976 ************************************ 00:08:20.976 END TEST env_memory 00:08:20.976 ************************************ 00:08:20.976 06:19:40 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:20.976 06:19:40 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:20.976 06:19:40 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.976 06:19:40 env -- common/autotest_common.sh@10 -- # set +x 00:08:20.976 ************************************ 00:08:20.976 START TEST env_vtophys 00:08:20.976 ************************************ 00:08:20.976 06:19:40 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:20.976 EAL: lib.eal log level changed from notice to debug 00:08:20.976 EAL: Detected lcore 0 as core 0 on socket 0 00:08:20.976 EAL: Detected lcore 1 as core 1 on socket 0 00:08:20.976 EAL: Detected lcore 2 as core 2 on socket 0 00:08:20.976 EAL: Detected lcore 3 as core 3 on socket 0 00:08:20.976 EAL: Detected lcore 4 as core 4 on socket 0 00:08:20.976 EAL: Detected lcore 5 as core 5 on socket 0 00:08:20.976 EAL: Detected lcore 6 as core 6 on socket 0 00:08:20.976 EAL: Detected lcore 7 as core 7 on socket 0 00:08:20.976 EAL: Detected lcore 8 as core 8 on socket 0 00:08:20.976 EAL: Detected lcore 9 as core 9 on socket 0 00:08:20.976 EAL: Detected lcore 10 as core 10 on socket 0 00:08:20.976 EAL: Detected lcore 11 as core 11 on socket 0 00:08:20.976 EAL: Detected lcore 12 as core 12 on socket 0 00:08:20.976 EAL: Detected lcore 13 as core 13 on socket 0 00:08:20.976 EAL: Detected lcore 14 as core 14 on socket 0 00:08:20.976 EAL: Detected lcore 15 as core 15 on socket 0 00:08:20.976 EAL: Detected lcore 16 as core 16 on socket 0 00:08:20.976 EAL: Detected lcore 17 as core 17 on socket 0 00:08:20.976 EAL: Detected lcore 18 as core 18 on socket 0 00:08:20.976 EAL: Detected lcore 19 as core 19 on socket 0 00:08:20.976 EAL: Detected lcore 20 as core 20 on socket 0 00:08:20.976 EAL: Detected lcore 21 as core 21 on socket 0 00:08:20.976 EAL: Detected lcore 22 as core 22 on socket 0 00:08:20.976 EAL: Detected lcore 23 as core 23 on socket 0 00:08:20.976 EAL: Detected lcore 24 as core 24 on socket 0 00:08:20.976 EAL: Detected lcore 25 as core 25 on socket 0 00:08:20.976 EAL: Detected lcore 26 as core 26 on socket 0 00:08:20.976 EAL: Detected lcore 27 as core 27 on socket 0 00:08:20.976 EAL: Detected lcore 28 as core 28 on socket 0 00:08:20.976 EAL: Detected lcore 29 as core 29 on socket 0 00:08:20.976 EAL: Detected lcore 30 as core 30 on socket 0 00:08:20.976 EAL: Detected lcore 31 as core 31 on socket 0 00:08:20.976 EAL: Detected lcore 32 as core 32 on socket 0 00:08:20.976 EAL: Detected lcore 33 as core 33 on socket 0 00:08:20.976 EAL: Detected lcore 34 as core 34 on socket 0 00:08:20.976 EAL: Detected lcore 35 as core 35 on socket 0 00:08:20.976 EAL: Detected lcore 36 as core 0 on socket 1 00:08:20.976 EAL: Detected lcore 37 as core 1 on socket 1 00:08:20.976 EAL: Detected lcore 38 as core 2 on socket 1 00:08:20.976 EAL: Detected lcore 39 as core 3 on socket 1 00:08:20.976 EAL: Detected lcore 40 as core 4 on socket 1 00:08:20.976 EAL: Detected lcore 41 as core 5 on socket 1 00:08:20.976 EAL: Detected lcore 42 as core 6 on socket 1 00:08:20.976 EAL: Detected lcore 43 as core 7 on socket 1 00:08:20.976 EAL: Detected lcore 44 as core 8 on socket 1 00:08:20.976 EAL: Detected lcore 45 as core 9 on socket 1 
00:08:20.976 EAL: Detected lcore 46 as core 10 on socket 1 00:08:20.976 EAL: Detected lcore 47 as core 11 on socket 1 00:08:20.976 EAL: Detected lcore 48 as core 12 on socket 1 00:08:20.976 EAL: Detected lcore 49 as core 13 on socket 1 00:08:20.976 EAL: Detected lcore 50 as core 14 on socket 1 00:08:20.976 EAL: Detected lcore 51 as core 15 on socket 1 00:08:20.976 EAL: Detected lcore 52 as core 16 on socket 1 00:08:20.976 EAL: Detected lcore 53 as core 17 on socket 1 00:08:20.976 EAL: Detected lcore 54 as core 18 on socket 1 00:08:20.976 EAL: Detected lcore 55 as core 19 on socket 1 00:08:20.976 EAL: Detected lcore 56 as core 20 on socket 1 00:08:20.976 EAL: Detected lcore 57 as core 21 on socket 1 00:08:20.976 EAL: Detected lcore 58 as core 22 on socket 1 00:08:20.976 EAL: Detected lcore 59 as core 23 on socket 1 00:08:20.976 EAL: Detected lcore 60 as core 24 on socket 1 00:08:20.976 EAL: Detected lcore 61 as core 25 on socket 1 00:08:20.976 EAL: Detected lcore 62 as core 26 on socket 1 00:08:20.976 EAL: Detected lcore 63 as core 27 on socket 1 00:08:20.976 EAL: Detected lcore 64 as core 28 on socket 1 00:08:20.976 EAL: Detected lcore 65 as core 29 on socket 1 00:08:20.976 EAL: Detected lcore 66 as core 30 on socket 1 00:08:20.976 EAL: Detected lcore 67 as core 31 on socket 1 00:08:20.976 EAL: Detected lcore 68 as core 32 on socket 1 00:08:20.976 EAL: Detected lcore 69 as core 33 on socket 1 00:08:20.976 EAL: Detected lcore 70 as core 34 on socket 1 00:08:20.976 EAL: Detected lcore 71 as core 35 on socket 1 00:08:20.976 EAL: Detected lcore 72 as core 0 on socket 0 00:08:20.976 EAL: Detected lcore 73 as core 1 on socket 0 00:08:20.976 EAL: Detected lcore 74 as core 2 on socket 0 00:08:20.976 EAL: Detected lcore 75 as core 3 on socket 0 00:08:20.976 EAL: Detected lcore 76 as core 4 on socket 0 00:08:20.976 EAL: Detected lcore 77 as core 5 on socket 0 00:08:20.976 EAL: Detected lcore 78 as core 6 on socket 0 00:08:20.976 EAL: Detected lcore 79 as core 7 on socket 0 00:08:20.976 EAL: Detected lcore 80 as core 8 on socket 0 00:08:20.976 EAL: Detected lcore 81 as core 9 on socket 0 00:08:20.976 EAL: Detected lcore 82 as core 10 on socket 0 00:08:20.976 EAL: Detected lcore 83 as core 11 on socket 0 00:08:20.976 EAL: Detected lcore 84 as core 12 on socket 0 00:08:20.976 EAL: Detected lcore 85 as core 13 on socket 0 00:08:20.976 EAL: Detected lcore 86 as core 14 on socket 0 00:08:20.976 EAL: Detected lcore 87 as core 15 on socket 0 00:08:20.977 EAL: Detected lcore 88 as core 16 on socket 0 00:08:20.977 EAL: Detected lcore 89 as core 17 on socket 0 00:08:20.977 EAL: Detected lcore 90 as core 18 on socket 0 00:08:20.977 EAL: Detected lcore 91 as core 19 on socket 0 00:08:20.977 EAL: Detected lcore 92 as core 20 on socket 0 00:08:20.977 EAL: Detected lcore 93 as core 21 on socket 0 00:08:20.977 EAL: Detected lcore 94 as core 22 on socket 0 00:08:20.977 EAL: Detected lcore 95 as core 23 on socket 0 00:08:20.977 EAL: Detected lcore 96 as core 24 on socket 0 00:08:20.977 EAL: Detected lcore 97 as core 25 on socket 0 00:08:20.977 EAL: Detected lcore 98 as core 26 on socket 0 00:08:20.977 EAL: Detected lcore 99 as core 27 on socket 0 00:08:20.977 EAL: Detected lcore 100 as core 28 on socket 0 00:08:20.977 EAL: Detected lcore 101 as core 29 on socket 0 00:08:20.977 EAL: Detected lcore 102 as core 30 on socket 0 00:08:20.977 EAL: Detected lcore 103 as core 31 on socket 0 00:08:20.977 EAL: Detected lcore 104 as core 32 on socket 0 00:08:20.977 EAL: Detected lcore 105 as core 33 on socket 0 00:08:20.977 EAL: 
Detected lcore 106 as core 34 on socket 0 00:08:20.977 EAL: Detected lcore 107 as core 35 on socket 0 00:08:20.977 EAL: Detected lcore 108 as core 0 on socket 1 00:08:20.977 EAL: Detected lcore 109 as core 1 on socket 1 00:08:20.977 EAL: Detected lcore 110 as core 2 on socket 1 00:08:20.977 EAL: Detected lcore 111 as core 3 on socket 1 00:08:20.977 EAL: Detected lcore 112 as core 4 on socket 1 00:08:20.977 EAL: Detected lcore 113 as core 5 on socket 1 00:08:20.977 EAL: Detected lcore 114 as core 6 on socket 1 00:08:20.977 EAL: Detected lcore 115 as core 7 on socket 1 00:08:20.977 EAL: Detected lcore 116 as core 8 on socket 1 00:08:20.977 EAL: Detected lcore 117 as core 9 on socket 1 00:08:20.977 EAL: Detected lcore 118 as core 10 on socket 1 00:08:20.977 EAL: Detected lcore 119 as core 11 on socket 1 00:08:20.977 EAL: Detected lcore 120 as core 12 on socket 1 00:08:20.977 EAL: Detected lcore 121 as core 13 on socket 1 00:08:20.977 EAL: Detected lcore 122 as core 14 on socket 1 00:08:20.977 EAL: Detected lcore 123 as core 15 on socket 1 00:08:20.977 EAL: Detected lcore 124 as core 16 on socket 1 00:08:20.977 EAL: Detected lcore 125 as core 17 on socket 1 00:08:20.977 EAL: Detected lcore 126 as core 18 on socket 1 00:08:20.977 EAL: Detected lcore 127 as core 19 on socket 1 00:08:20.977 EAL: Skipped lcore 128 as core 20 on socket 1 00:08:20.977 EAL: Skipped lcore 129 as core 21 on socket 1 00:08:20.977 EAL: Skipped lcore 130 as core 22 on socket 1 00:08:20.977 EAL: Skipped lcore 131 as core 23 on socket 1 00:08:20.977 EAL: Skipped lcore 132 as core 24 on socket 1 00:08:20.977 EAL: Skipped lcore 133 as core 25 on socket 1 00:08:20.977 EAL: Skipped lcore 134 as core 26 on socket 1 00:08:20.977 EAL: Skipped lcore 135 as core 27 on socket 1 00:08:20.977 EAL: Skipped lcore 136 as core 28 on socket 1 00:08:20.977 EAL: Skipped lcore 137 as core 29 on socket 1 00:08:20.977 EAL: Skipped lcore 138 as core 30 on socket 1 00:08:20.977 EAL: Skipped lcore 139 as core 31 on socket 1 00:08:20.977 EAL: Skipped lcore 140 as core 32 on socket 1 00:08:20.977 EAL: Skipped lcore 141 as core 33 on socket 1 00:08:20.977 EAL: Skipped lcore 142 as core 34 on socket 1 00:08:20.977 EAL: Skipped lcore 143 as core 35 on socket 1 00:08:20.977 EAL: Maximum logical cores by configuration: 128 00:08:20.977 EAL: Detected CPU lcores: 128 00:08:20.977 EAL: Detected NUMA nodes: 2 00:08:20.977 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:20.977 EAL: Detected shared linkage of DPDK 00:08:20.977 EAL: No shared files mode enabled, IPC will be disabled 00:08:21.240 EAL: Bus pci wants IOVA as 'DC' 00:08:21.240 EAL: Buses did not request a specific IOVA mode. 00:08:21.240 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:21.240 EAL: Selected IOVA mode 'VA' 00:08:21.240 EAL: Probing VFIO support... 00:08:21.240 EAL: IOMMU type 1 (Type 1) is supported 00:08:21.240 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:21.240 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:21.240 EAL: VFIO support initialized 00:08:21.240 EAL: Ask a virtual area of 0x2e000 bytes 00:08:21.240 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:21.240 EAL: Setting up physically contiguous memory... 
00:08:21.240 EAL: Setting maximum number of open files to 524288 00:08:21.240 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:21.240 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:21.240 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:21.240 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.240 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:21.240 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:21.240 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.240 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:21.240 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:21.240 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.240 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:21.240 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:21.240 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.240 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:21.240 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:21.240 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.240 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:21.240 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:21.240 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.240 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:21.240 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:21.240 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.240 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:21.240 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:21.240 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.240 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:21.240 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:21.240 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:21.240 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.240 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:21.240 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:21.240 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.240 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:21.240 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:21.240 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.240 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:21.240 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:21.240 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.240 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:21.240 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:21.240 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.240 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:21.240 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:21.240 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.240 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:21.240 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:21.240 EAL: Ask a virtual area of 0x61000 bytes 00:08:21.240 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:21.240 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:21.240 EAL: Ask a virtual area of 0x400000000 bytes 00:08:21.240 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:08:21.240 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:21.240 EAL: Hugepages will be freed exactly as allocated. 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: TSC frequency is ~2400000 KHz 00:08:21.240 EAL: Main lcore 0 is ready (tid=7f931d9c7a00;cpuset=[0]) 00:08:21.240 EAL: Trying to obtain current memory policy. 00:08:21.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.240 EAL: Restoring previous memory policy: 0 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was expanded by 2MB 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:21.240 EAL: Mem event callback 'spdk:(nil)' registered 00:08:21.240 00:08:21.240 00:08:21.240 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.240 http://cunit.sourceforge.net/ 00:08:21.240 00:08:21.240 00:08:21.240 Suite: components_suite 00:08:21.240 Test: vtophys_malloc_test ...passed 00:08:21.240 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:21.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.240 EAL: Restoring previous memory policy: 4 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was expanded by 4MB 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was shrunk by 4MB 00:08:21.240 EAL: Trying to obtain current memory policy. 00:08:21.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.240 EAL: Restoring previous memory policy: 4 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was expanded by 6MB 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was shrunk by 6MB 00:08:21.240 EAL: Trying to obtain current memory policy. 00:08:21.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.240 EAL: Restoring previous memory policy: 4 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was expanded by 10MB 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was shrunk by 10MB 00:08:21.240 EAL: Trying to obtain current memory policy. 
00:08:21.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.240 EAL: Restoring previous memory policy: 4 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was expanded by 18MB 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was shrunk by 18MB 00:08:21.240 EAL: Trying to obtain current memory policy. 00:08:21.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.240 EAL: Restoring previous memory policy: 4 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was expanded by 34MB 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was shrunk by 34MB 00:08:21.240 EAL: Trying to obtain current memory policy. 00:08:21.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.240 EAL: Restoring previous memory policy: 4 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was expanded by 66MB 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was shrunk by 66MB 00:08:21.240 EAL: Trying to obtain current memory policy. 00:08:21.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.240 EAL: Restoring previous memory policy: 4 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was expanded by 130MB 00:08:21.240 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.240 EAL: request: mp_malloc_sync 00:08:21.240 EAL: No shared files mode enabled, IPC is disabled 00:08:21.240 EAL: Heap on socket 0 was shrunk by 130MB 00:08:21.240 EAL: Trying to obtain current memory policy. 00:08:21.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.240 EAL: Restoring previous memory policy: 4 00:08:21.241 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.241 EAL: request: mp_malloc_sync 00:08:21.241 EAL: No shared files mode enabled, IPC is disabled 00:08:21.241 EAL: Heap on socket 0 was expanded by 258MB 00:08:21.241 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.241 EAL: request: mp_malloc_sync 00:08:21.241 EAL: No shared files mode enabled, IPC is disabled 00:08:21.241 EAL: Heap on socket 0 was shrunk by 258MB 00:08:21.241 EAL: Trying to obtain current memory policy. 
00:08:21.241 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.502 EAL: Restoring previous memory policy: 4 00:08:21.502 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.502 EAL: request: mp_malloc_sync 00:08:21.502 EAL: No shared files mode enabled, IPC is disabled 00:08:21.502 EAL: Heap on socket 0 was expanded by 514MB 00:08:21.502 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.502 EAL: request: mp_malloc_sync 00:08:21.502 EAL: No shared files mode enabled, IPC is disabled 00:08:21.502 EAL: Heap on socket 0 was shrunk by 514MB 00:08:21.502 EAL: Trying to obtain current memory policy. 00:08:21.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:21.763 EAL: Restoring previous memory policy: 4 00:08:21.763 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.763 EAL: request: mp_malloc_sync 00:08:21.763 EAL: No shared files mode enabled, IPC is disabled 00:08:21.763 EAL: Heap on socket 0 was expanded by 1026MB 00:08:21.763 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.024 EAL: request: mp_malloc_sync 00:08:22.024 EAL: No shared files mode enabled, IPC is disabled 00:08:22.024 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:22.024 passed 00:08:22.024 00:08:22.024 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.024 suites 1 1 n/a 0 0 00:08:22.024 tests 2 2 2 0 0 00:08:22.024 asserts 497 497 497 0 n/a 00:08:22.024 00:08:22.024 Elapsed time = 0.689 seconds 00:08:22.024 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.024 EAL: request: mp_malloc_sync 00:08:22.024 EAL: No shared files mode enabled, IPC is disabled 00:08:22.024 EAL: Heap on socket 0 was shrunk by 2MB 00:08:22.024 EAL: No shared files mode enabled, IPC is disabled 00:08:22.024 EAL: No shared files mode enabled, IPC is disabled 00:08:22.024 EAL: No shared files mode enabled, IPC is disabled 00:08:22.024 00:08:22.024 real 0m0.854s 00:08:22.024 user 0m0.444s 00:08:22.024 sys 0m0.369s 00:08:22.024 06:19:41 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.024 06:19:41 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:22.024 ************************************ 00:08:22.024 END TEST env_vtophys 00:08:22.024 ************************************ 00:08:22.024 06:19:41 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:22.024 06:19:41 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:22.024 06:19:41 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.024 06:19:41 env -- common/autotest_common.sh@10 -- # set +x 00:08:22.024 ************************************ 00:08:22.024 START TEST env_pci 00:08:22.024 ************************************ 00:08:22.024 06:19:41 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:22.024 00:08:22.024 00:08:22.024 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.024 http://cunit.sourceforge.net/ 00:08:22.024 00:08:22.024 00:08:22.024 Suite: pci 00:08:22.024 Test: pci_hook ...[2024-11-20 06:19:41.790996] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2470046 has claimed it 00:08:22.024 EAL: Cannot find device (10000:00:01.0) 00:08:22.024 EAL: Failed to attach device on primary process 00:08:22.024 passed 00:08:22.024 00:08:22.024 Run Summary: Type Total Ran Passed Failed Inactive 
00:08:22.024 suites 1 1 n/a 0 0 00:08:22.024 tests 1 1 1 0 0 00:08:22.024 asserts 25 25 25 0 n/a 00:08:22.024 00:08:22.024 Elapsed time = 0.031 seconds 00:08:22.024 00:08:22.024 real 0m0.052s 00:08:22.024 user 0m0.015s 00:08:22.024 sys 0m0.036s 00:08:22.024 06:19:41 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.024 06:19:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:22.024 ************************************ 00:08:22.024 END TEST env_pci 00:08:22.024 ************************************ 00:08:22.024 06:19:41 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:22.024 06:19:41 env -- env/env.sh@15 -- # uname 00:08:22.024 06:19:41 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:22.025 06:19:41 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:22.025 06:19:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:22.025 06:19:41 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:22.025 06:19:41 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.025 06:19:41 env -- common/autotest_common.sh@10 -- # set +x 00:08:22.025 ************************************ 00:08:22.025 START TEST env_dpdk_post_init 00:08:22.025 ************************************ 00:08:22.025 06:19:41 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:22.286 EAL: Detected CPU lcores: 128 00:08:22.286 EAL: Detected NUMA nodes: 2 00:08:22.286 EAL: Detected shared linkage of DPDK 00:08:22.286 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:22.286 EAL: Selected IOVA mode 'VA' 00:08:22.286 EAL: VFIO support initialized 00:08:22.286 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:22.286 EAL: Using IOMMU type 1 (Type 1) 00:08:22.286 EAL: Ignore mapping IO port bar(1) 00:08:22.547 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:08:22.547 EAL: Ignore mapping IO port bar(1) 00:08:22.807 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:08:22.807 EAL: Ignore mapping IO port bar(1) 00:08:23.068 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:08:23.068 EAL: Ignore mapping IO port bar(1) 00:08:23.068 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:08:23.329 EAL: Ignore mapping IO port bar(1) 00:08:23.329 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:08:23.589 EAL: Ignore mapping IO port bar(1) 00:08:23.589 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:08:23.849 EAL: Ignore mapping IO port bar(1) 00:08:23.849 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:08:23.849 EAL: Ignore mapping IO port bar(1) 00:08:24.110 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:08:24.371 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:08:24.371 EAL: Ignore mapping IO port bar(1) 00:08:24.632 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:08:24.632 EAL: Ignore mapping IO port bar(1) 00:08:24.632 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:08:24.893 EAL: Ignore mapping IO port bar(1) 00:08:24.893 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:08:25.156 EAL: Ignore mapping IO port bar(1) 00:08:25.156 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:08:25.417 EAL: Ignore mapping IO port bar(1) 00:08:25.417 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:08:25.417 EAL: Ignore mapping IO port bar(1) 00:08:25.678 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:08:25.678 EAL: Ignore mapping IO port bar(1) 00:08:25.939 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:08:25.939 EAL: Ignore mapping IO port bar(1) 00:08:26.200 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:08:26.200 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:08:26.200 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:08:26.200 Starting DPDK initialization... 00:08:26.200 Starting SPDK post initialization... 00:08:26.200 SPDK NVMe probe 00:08:26.200 Attaching to 0000:65:00.0 00:08:26.200 Attached to 0000:65:00.0 00:08:26.200 Cleaning up... 00:08:28.209 00:08:28.209 real 0m5.747s 00:08:28.209 user 0m0.112s 00:08:28.209 sys 0m0.191s 00:08:28.209 06:19:47 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.209 06:19:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:28.209 ************************************ 00:08:28.209 END TEST env_dpdk_post_init 00:08:28.209 ************************************ 00:08:28.209 06:19:47 env -- env/env.sh@26 -- # uname 00:08:28.209 06:19:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:28.209 06:19:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:28.209 06:19:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:28.209 06:19:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:28.209 06:19:47 env -- common/autotest_common.sh@10 -- # set +x 00:08:28.209 ************************************ 00:08:28.209 START TEST env_mem_callbacks 00:08:28.209 ************************************ 00:08:28.209 06:19:47 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:28.209 EAL: Detected CPU lcores: 128 00:08:28.209 EAL: Detected NUMA nodes: 2 00:08:28.209 EAL: Detected shared linkage of DPDK 00:08:28.209 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:28.209 EAL: Selected IOVA mode 'VA' 00:08:28.209 EAL: VFIO support initialized 00:08:28.209 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:28.209 00:08:28.209 00:08:28.209 CUnit - A unit testing framework for C - Version 2.1-3 00:08:28.209 http://cunit.sourceforge.net/ 00:08:28.209 00:08:28.209 00:08:28.209 Suite: memory 00:08:28.209 Test: test ... 
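The register/unregister trace that follows is produced by a notification callback attached to an SPDK memory map, not by prints inside the allocator itself. A rough sketch of that pattern, assuming the public spdk_mem_map API from spdk/env.h; the shipped mem_callbacks test differs in detail (it also asserts translations, hence the PASSED markers):

```c
/* Sketch (not the shipped test): a mem map whose notify callback prints
 * lines like the "register 0x... / unregister 0x..." trace below. */
#include <stdio.h>
#include "spdk/env.h"

static int
notify_cb(void *ctx, struct spdk_mem_map *map,
	  enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	(void)ctx;
	(void)map;
	printf("%s %p %zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0;
}

static const struct spdk_mem_map_ops ops = {
	.notify_cb = notify_cb,
	.are_contiguous = NULL,
};

void
watch_registrations(void)
{
	/* Regions already registered are replayed through notify_cb when
	 * the map is allocated; later spdk_mem_register()/unregister()
	 * calls arrive as they happen. */
	struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);

	/* ... allocate and free env-heap buffers here, observe callbacks ... */
	spdk_mem_map_free(&map);
}
```

That replay behavior is why the malloc and register lines interleave the way they do in the trace: allocations large enough to pull new hugepages into the env heap show up as fresh registrations.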
00:08:28.209 register 0x200000200000 2097152 00:08:28.209 malloc 3145728 00:08:28.209 register 0x200000400000 4194304 00:08:28.209 buf 0x200000500000 len 3145728 PASSED 00:08:28.209 malloc 64 00:08:28.209 buf 0x2000004fff40 len 64 PASSED 00:08:28.209 malloc 4194304 00:08:28.209 register 0x200000800000 6291456 00:08:28.209 buf 0x200000a00000 len 4194304 PASSED 00:08:28.209 free 0x200000500000 3145728 00:08:28.209 free 0x2000004fff40 64 00:08:28.209 unregister 0x200000400000 4194304 PASSED 00:08:28.209 free 0x200000a00000 4194304 00:08:28.209 unregister 0x200000800000 6291456 PASSED 00:08:28.209 malloc 8388608 00:08:28.209 register 0x200000400000 10485760 00:08:28.209 buf 0x200000600000 len 8388608 PASSED 00:08:28.209 free 0x200000600000 8388608 00:08:28.209 unregister 0x200000400000 10485760 PASSED 00:08:28.209 passed 00:08:28.209 00:08:28.209 Run Summary: Type Total Ran Passed Failed Inactive 00:08:28.209 suites 1 1 n/a 0 0 00:08:28.209 tests 1 1 1 0 0 00:08:28.209 asserts 15 15 15 0 n/a 00:08:28.209 00:08:28.209 Elapsed time = 0.010 seconds 00:08:28.209 00:08:28.209 real 0m0.068s 00:08:28.209 user 0m0.030s 00:08:28.209 sys 0m0.038s 00:08:28.209 06:19:47 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.209 06:19:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:28.209 ************************************ 00:08:28.209 END TEST env_mem_callbacks 00:08:28.209 ************************************ 00:08:28.209 00:08:28.209 real 0m7.554s 00:08:28.209 user 0m1.046s 00:08:28.209 sys 0m1.053s 00:08:28.209 06:19:47 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.209 06:19:47 env -- common/autotest_common.sh@10 -- # set +x 00:08:28.209 ************************************ 00:08:28.209 END TEST env 00:08:28.209 ************************************ 00:08:28.209 06:19:47 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:28.209 06:19:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:28.209 06:19:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:28.209 06:19:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.209 ************************************ 00:08:28.209 START TEST rpc 00:08:28.209 ************************************ 00:08:28.209 06:19:47 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:28.209 * Looking for test storage... 
00:08:28.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:28.209 06:19:48 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:28.209 06:19:48 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:28.209 06:19:48 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:28.471 06:19:48 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.471 06:19:48 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.471 06:19:48 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.471 06:19:48 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.471 06:19:48 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.471 06:19:48 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.471 06:19:48 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.471 06:19:48 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.471 06:19:48 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.471 06:19:48 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.471 06:19:48 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.471 06:19:48 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:28.471 06:19:48 rpc -- scripts/common.sh@345 -- # : 1 00:08:28.471 06:19:48 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.471 06:19:48 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:28.471 06:19:48 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:28.471 06:19:48 rpc -- scripts/common.sh@353 -- # local d=1 00:08:28.471 06:19:48 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.471 06:19:48 rpc -- scripts/common.sh@355 -- # echo 1 00:08:28.471 06:19:48 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.471 06:19:48 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:28.471 06:19:48 rpc -- scripts/common.sh@353 -- # local d=2 00:08:28.471 06:19:48 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.471 06:19:48 rpc -- scripts/common.sh@355 -- # echo 2 00:08:28.471 06:19:48 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.471 06:19:48 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.471 06:19:48 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.471 06:19:48 rpc -- scripts/common.sh@368 -- # return 0 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:28.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.471 --rc genhtml_branch_coverage=1 00:08:28.471 --rc genhtml_function_coverage=1 00:08:28.471 --rc genhtml_legend=1 00:08:28.471 --rc geninfo_all_blocks=1 00:08:28.471 --rc geninfo_unexecuted_blocks=1 00:08:28.471 00:08:28.471 ' 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:28.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.471 --rc genhtml_branch_coverage=1 00:08:28.471 --rc genhtml_function_coverage=1 00:08:28.471 --rc genhtml_legend=1 00:08:28.471 --rc geninfo_all_blocks=1 00:08:28.471 --rc geninfo_unexecuted_blocks=1 00:08:28.471 00:08:28.471 ' 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:28.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.471 --rc genhtml_branch_coverage=1 00:08:28.471 --rc genhtml_function_coverage=1 
00:08:28.471 --rc genhtml_legend=1 00:08:28.471 --rc geninfo_all_blocks=1 00:08:28.471 --rc geninfo_unexecuted_blocks=1 00:08:28.471 00:08:28.471 ' 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:28.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.471 --rc genhtml_branch_coverage=1 00:08:28.471 --rc genhtml_function_coverage=1 00:08:28.471 --rc genhtml_legend=1 00:08:28.471 --rc geninfo_all_blocks=1 00:08:28.471 --rc geninfo_unexecuted_blocks=1 00:08:28.471 00:08:28.471 ' 00:08:28.471 06:19:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2471374 00:08:28.471 06:19:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:28.471 06:19:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2471374 00:08:28.471 06:19:48 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@833 -- # '[' -z 2471374 ']' 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:28.471 06:19:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.471 [2024-11-20 06:19:48.208696] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:08:28.471 [2024-11-20 06:19:48.208770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471374 ] 00:08:28.471 [2024-11-20 06:19:48.304437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.471 [2024-11-20 06:19:48.356264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:28.471 [2024-11-20 06:19:48.356323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2471374' to capture a snapshot of events at runtime. 00:08:28.471 [2024-11-20 06:19:48.356333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.471 [2024-11-20 06:19:48.356340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.471 [2024-11-20 06:19:48.356347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2471374 for offline analysis/debug. 
00:08:28.471 [2024-11-20 06:19:48.357197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.415 06:19:49 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:29.415 06:19:49 rpc -- common/autotest_common.sh@866 -- # return 0 00:08:29.415 06:19:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:29.415 06:19:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:29.415 06:19:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:29.415 06:19:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:29.415 06:19:49 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:29.415 06:19:49 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.415 06:19:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.415 ************************************ 00:08:29.415 START TEST rpc_integrity 00:08:29.415 ************************************ 00:08:29.415 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:08:29.415 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:29.415 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.415 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:29.415 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.415 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:29.415 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:29.415 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:29.415 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:29.415 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.415 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:29.415 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.415 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:29.415 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:29.415 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.415 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:29.415 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.415 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:29.415 { 00:08:29.415 "name": "Malloc0", 00:08:29.415 "aliases": [ 00:08:29.415 "347d1e89-c2e4-4a99-92af-3d6982cd8371" 00:08:29.415 ], 00:08:29.415 "product_name": "Malloc disk", 00:08:29.415 "block_size": 512, 00:08:29.415 "num_blocks": 16384, 00:08:29.415 "uuid": "347d1e89-c2e4-4a99-92af-3d6982cd8371", 00:08:29.415 "assigned_rate_limits": { 00:08:29.415 "rw_ios_per_sec": 0, 00:08:29.415 "rw_mbytes_per_sec": 0, 00:08:29.415 "r_mbytes_per_sec": 0, 00:08:29.415 "w_mbytes_per_sec": 0 00:08:29.415 }, 
00:08:29.415 "claimed": false, 00:08:29.415 "zoned": false, 00:08:29.415 "supported_io_types": { 00:08:29.415 "read": true, 00:08:29.415 "write": true, 00:08:29.415 "unmap": true, 00:08:29.415 "flush": true, 00:08:29.415 "reset": true, 00:08:29.415 "nvme_admin": false, 00:08:29.415 "nvme_io": false, 00:08:29.415 "nvme_io_md": false, 00:08:29.415 "write_zeroes": true, 00:08:29.415 "zcopy": true, 00:08:29.415 "get_zone_info": false, 00:08:29.415 "zone_management": false, 00:08:29.415 "zone_append": false, 00:08:29.415 "compare": false, 00:08:29.415 "compare_and_write": false, 00:08:29.415 "abort": true, 00:08:29.415 "seek_hole": false, 00:08:29.415 "seek_data": false, 00:08:29.415 "copy": true, 00:08:29.415 "nvme_iov_md": false 00:08:29.415 }, 00:08:29.415 "memory_domains": [ 00:08:29.415 { 00:08:29.415 "dma_device_id": "system", 00:08:29.415 "dma_device_type": 1 00:08:29.415 }, 00:08:29.415 { 00:08:29.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.415 "dma_device_type": 2 00:08:29.415 } 00:08:29.415 ], 00:08:29.415 "driver_specific": {} 00:08:29.416 } 00:08:29.416 ]' 00:08:29.416 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:29.416 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:29.416 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:29.416 [2024-11-20 06:19:49.216679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:29.416 [2024-11-20 06:19:49.216732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.416 [2024-11-20 06:19:49.216753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1af2800 00:08:29.416 [2024-11-20 06:19:49.216762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.416 [2024-11-20 06:19:49.218332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.416 [2024-11-20 06:19:49.218371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:29.416 Passthru0 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.416 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.416 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:29.416 { 00:08:29.416 "name": "Malloc0", 00:08:29.416 "aliases": [ 00:08:29.416 "347d1e89-c2e4-4a99-92af-3d6982cd8371" 00:08:29.416 ], 00:08:29.416 "product_name": "Malloc disk", 00:08:29.416 "block_size": 512, 00:08:29.416 "num_blocks": 16384, 00:08:29.416 "uuid": "347d1e89-c2e4-4a99-92af-3d6982cd8371", 00:08:29.416 "assigned_rate_limits": { 00:08:29.416 "rw_ios_per_sec": 0, 00:08:29.416 "rw_mbytes_per_sec": 0, 00:08:29.416 "r_mbytes_per_sec": 0, 00:08:29.416 "w_mbytes_per_sec": 0 00:08:29.416 }, 00:08:29.416 "claimed": true, 00:08:29.416 "claim_type": "exclusive_write", 00:08:29.416 "zoned": false, 00:08:29.416 "supported_io_types": { 00:08:29.416 "read": true, 00:08:29.416 "write": true, 00:08:29.416 "unmap": true, 00:08:29.416 "flush": 
true, 00:08:29.416 "reset": true, 00:08:29.416 "nvme_admin": false, 00:08:29.416 "nvme_io": false, 00:08:29.416 "nvme_io_md": false, 00:08:29.416 "write_zeroes": true, 00:08:29.416 "zcopy": true, 00:08:29.416 "get_zone_info": false, 00:08:29.416 "zone_management": false, 00:08:29.416 "zone_append": false, 00:08:29.416 "compare": false, 00:08:29.416 "compare_and_write": false, 00:08:29.416 "abort": true, 00:08:29.416 "seek_hole": false, 00:08:29.416 "seek_data": false, 00:08:29.416 "copy": true, 00:08:29.416 "nvme_iov_md": false 00:08:29.416 }, 00:08:29.416 "memory_domains": [ 00:08:29.416 { 00:08:29.416 "dma_device_id": "system", 00:08:29.416 "dma_device_type": 1 00:08:29.416 }, 00:08:29.416 { 00:08:29.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.416 "dma_device_type": 2 00:08:29.416 } 00:08:29.416 ], 00:08:29.416 "driver_specific": {} 00:08:29.416 }, 00:08:29.416 { 00:08:29.416 "name": "Passthru0", 00:08:29.416 "aliases": [ 00:08:29.416 "aff61052-ece8-55de-953b-2a7748f11b7a" 00:08:29.416 ], 00:08:29.416 "product_name": "passthru", 00:08:29.416 "block_size": 512, 00:08:29.416 "num_blocks": 16384, 00:08:29.416 "uuid": "aff61052-ece8-55de-953b-2a7748f11b7a", 00:08:29.416 "assigned_rate_limits": { 00:08:29.416 "rw_ios_per_sec": 0, 00:08:29.416 "rw_mbytes_per_sec": 0, 00:08:29.416 "r_mbytes_per_sec": 0, 00:08:29.416 "w_mbytes_per_sec": 0 00:08:29.416 }, 00:08:29.416 "claimed": false, 00:08:29.416 "zoned": false, 00:08:29.416 "supported_io_types": { 00:08:29.416 "read": true, 00:08:29.416 "write": true, 00:08:29.416 "unmap": true, 00:08:29.416 "flush": true, 00:08:29.416 "reset": true, 00:08:29.416 "nvme_admin": false, 00:08:29.416 "nvme_io": false, 00:08:29.416 "nvme_io_md": false, 00:08:29.416 "write_zeroes": true, 00:08:29.416 "zcopy": true, 00:08:29.416 "get_zone_info": false, 00:08:29.416 "zone_management": false, 00:08:29.416 "zone_append": false, 00:08:29.416 "compare": false, 00:08:29.416 "compare_and_write": false, 00:08:29.416 "abort": true, 00:08:29.416 "seek_hole": false, 00:08:29.416 "seek_data": false, 00:08:29.416 "copy": true, 00:08:29.416 "nvme_iov_md": false 00:08:29.416 }, 00:08:29.416 "memory_domains": [ 00:08:29.416 { 00:08:29.416 "dma_device_id": "system", 00:08:29.416 "dma_device_type": 1 00:08:29.416 }, 00:08:29.416 { 00:08:29.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.416 "dma_device_type": 2 00:08:29.416 } 00:08:29.416 ], 00:08:29.416 "driver_specific": { 00:08:29.416 "passthru": { 00:08:29.416 "name": "Passthru0", 00:08:29.416 "base_bdev_name": "Malloc0" 00:08:29.416 } 00:08:29.416 } 00:08:29.416 } 00:08:29.416 ]' 00:08:29.416 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:29.416 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:29.416 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.416 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.416 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:29.416 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.416 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:29.677 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:29.677 06:19:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:29.677 00:08:29.677 real 0m0.306s 00:08:29.677 user 0m0.185s 00:08:29.677 sys 0m0.044s 00:08:29.677 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.677 06:19:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:29.677 ************************************ 00:08:29.677 END TEST rpc_integrity 00:08:29.677 ************************************ 00:08:29.677 06:19:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:29.677 06:19:49 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:29.677 06:19:49 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.677 06:19:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.677 ************************************ 00:08:29.677 START TEST rpc_plugins 00:08:29.677 ************************************ 00:08:29.677 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:08:29.677 06:19:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:29.677 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.677 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:29.677 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.677 06:19:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:29.677 06:19:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:29.677 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.677 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:29.677 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.677 06:19:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:29.677 { 00:08:29.677 "name": "Malloc1", 00:08:29.677 "aliases": [ 00:08:29.677 "2094e821-1d32-45ce-9b92-e7b7c5a2b0f5" 00:08:29.677 ], 00:08:29.677 "product_name": "Malloc disk", 00:08:29.677 "block_size": 4096, 00:08:29.677 "num_blocks": 256, 00:08:29.677 "uuid": "2094e821-1d32-45ce-9b92-e7b7c5a2b0f5", 00:08:29.677 "assigned_rate_limits": { 00:08:29.677 "rw_ios_per_sec": 0, 00:08:29.677 "rw_mbytes_per_sec": 0, 00:08:29.677 "r_mbytes_per_sec": 0, 00:08:29.677 "w_mbytes_per_sec": 0 00:08:29.677 }, 00:08:29.677 "claimed": false, 00:08:29.677 "zoned": false, 00:08:29.677 "supported_io_types": { 00:08:29.677 "read": true, 00:08:29.677 "write": true, 00:08:29.677 "unmap": true, 00:08:29.677 "flush": true, 00:08:29.677 "reset": true, 00:08:29.677 "nvme_admin": false, 00:08:29.677 "nvme_io": false, 00:08:29.677 "nvme_io_md": false, 00:08:29.677 "write_zeroes": true, 00:08:29.677 "zcopy": true, 00:08:29.677 "get_zone_info": false, 00:08:29.677 "zone_management": false, 00:08:29.677 "zone_append": false, 00:08:29.677 "compare": false, 00:08:29.677 "compare_and_write": false, 00:08:29.677 "abort": true, 00:08:29.677 "seek_hole": false, 00:08:29.677 "seek_data": false, 00:08:29.677 "copy": true, 00:08:29.677 "nvme_iov_md": false 
00:08:29.677 }, 00:08:29.677 "memory_domains": [ 00:08:29.677 { 00:08:29.678 "dma_device_id": "system", 00:08:29.678 "dma_device_type": 1 00:08:29.678 }, 00:08:29.678 { 00:08:29.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.678 "dma_device_type": 2 00:08:29.678 } 00:08:29.678 ], 00:08:29.678 "driver_specific": {} 00:08:29.678 } 00:08:29.678 ]' 00:08:29.678 06:19:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:29.678 06:19:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:29.678 06:19:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:29.678 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.678 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:29.678 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.678 06:19:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:29.678 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.678 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:29.678 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.678 06:19:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:29.678 06:19:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:29.939 06:19:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:29.939 00:08:29.939 real 0m0.151s 00:08:29.939 user 0m0.089s 00:08:29.939 sys 0m0.026s 00:08:29.939 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.939 06:19:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:29.939 ************************************ 00:08:29.939 END TEST rpc_plugins 00:08:29.939 ************************************ 00:08:29.939 06:19:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:29.939 06:19:49 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:29.939 06:19:49 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.939 06:19:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.939 ************************************ 00:08:29.939 START TEST rpc_trace_cmd_test 00:08:29.939 ************************************ 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:29.939 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2471374", 00:08:29.939 "tpoint_group_mask": "0x8", 00:08:29.939 "iscsi_conn": { 00:08:29.939 "mask": "0x2", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "scsi": { 00:08:29.939 "mask": "0x4", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "bdev": { 00:08:29.939 "mask": "0x8", 00:08:29.939 "tpoint_mask": "0xffffffffffffffff" 00:08:29.939 }, 00:08:29.939 "nvmf_rdma": { 00:08:29.939 "mask": "0x10", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "nvmf_tcp": { 00:08:29.939 "mask": "0x20", 00:08:29.939 
"tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "ftl": { 00:08:29.939 "mask": "0x40", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "blobfs": { 00:08:29.939 "mask": "0x80", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "dsa": { 00:08:29.939 "mask": "0x200", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "thread": { 00:08:29.939 "mask": "0x400", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "nvme_pcie": { 00:08:29.939 "mask": "0x800", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "iaa": { 00:08:29.939 "mask": "0x1000", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "nvme_tcp": { 00:08:29.939 "mask": "0x2000", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "bdev_nvme": { 00:08:29.939 "mask": "0x4000", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "sock": { 00:08:29.939 "mask": "0x8000", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "blob": { 00:08:29.939 "mask": "0x10000", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "bdev_raid": { 00:08:29.939 "mask": "0x20000", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 }, 00:08:29.939 "scheduler": { 00:08:29.939 "mask": "0x40000", 00:08:29.939 "tpoint_mask": "0x0" 00:08:29.939 } 00:08:29.939 }' 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:29.939 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:30.254 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:30.254 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:30.254 06:19:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:30.254 00:08:30.254 real 0m0.255s 00:08:30.254 user 0m0.212s 00:08:30.254 sys 0m0.032s 00:08:30.254 06:19:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.254 06:19:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.254 ************************************ 00:08:30.254 END TEST rpc_trace_cmd_test 00:08:30.254 ************************************ 00:08:30.254 06:19:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:30.254 06:19:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:30.254 06:19:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:30.254 06:19:49 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:30.254 06:19:49 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.254 06:19:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.254 ************************************ 00:08:30.254 START TEST rpc_daemon_integrity 00:08:30.254 ************************************ 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.254 06:19:50 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:30.254 { 00:08:30.254 "name": "Malloc2", 00:08:30.254 "aliases": [ 00:08:30.254 "1edb9674-7d42-4d66-b53b-0ba2f952a28e" 00:08:30.254 ], 00:08:30.254 "product_name": "Malloc disk", 00:08:30.254 "block_size": 512, 00:08:30.254 "num_blocks": 16384, 00:08:30.254 "uuid": "1edb9674-7d42-4d66-b53b-0ba2f952a28e", 00:08:30.254 "assigned_rate_limits": { 00:08:30.254 "rw_ios_per_sec": 0, 00:08:30.254 "rw_mbytes_per_sec": 0, 00:08:30.254 "r_mbytes_per_sec": 0, 00:08:30.254 "w_mbytes_per_sec": 0 00:08:30.254 }, 00:08:30.254 "claimed": false, 00:08:30.254 "zoned": false, 00:08:30.254 "supported_io_types": { 00:08:30.254 "read": true, 00:08:30.254 "write": true, 00:08:30.254 "unmap": true, 00:08:30.254 "flush": true, 00:08:30.254 "reset": true, 00:08:30.254 "nvme_admin": false, 00:08:30.254 "nvme_io": false, 00:08:30.254 "nvme_io_md": false, 00:08:30.254 "write_zeroes": true, 00:08:30.254 "zcopy": true, 00:08:30.254 "get_zone_info": false, 00:08:30.254 "zone_management": false, 00:08:30.254 "zone_append": false, 00:08:30.254 "compare": false, 00:08:30.254 "compare_and_write": false, 00:08:30.254 "abort": true, 00:08:30.254 "seek_hole": false, 00:08:30.254 "seek_data": false, 00:08:30.254 "copy": true, 00:08:30.254 "nvme_iov_md": false 00:08:30.254 }, 00:08:30.254 "memory_domains": [ 00:08:30.254 { 00:08:30.254 "dma_device_id": "system", 00:08:30.254 "dma_device_type": 1 00:08:30.254 }, 00:08:30.254 { 00:08:30.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.254 "dma_device_type": 2 00:08:30.254 } 00:08:30.254 ], 00:08:30.254 "driver_specific": {} 00:08:30.254 } 00:08:30.254 ]' 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.254 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:30.516 [2024-11-20 06:19:50.171326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:30.516 
[2024-11-20 06:19:50.171382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.516 [2024-11-20 06:19:50.171402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a3f550 00:08:30.516 [2024-11-20 06:19:50.171410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.516 [2024-11-20 06:19:50.173049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.516 [2024-11-20 06:19:50.173086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:30.516 Passthru0 00:08:30.516 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.516 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:30.516 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:30.517 { 00:08:30.517 "name": "Malloc2", 00:08:30.517 "aliases": [ 00:08:30.517 "1edb9674-7d42-4d66-b53b-0ba2f952a28e" 00:08:30.517 ], 00:08:30.517 "product_name": "Malloc disk", 00:08:30.517 "block_size": 512, 00:08:30.517 "num_blocks": 16384, 00:08:30.517 "uuid": "1edb9674-7d42-4d66-b53b-0ba2f952a28e", 00:08:30.517 "assigned_rate_limits": { 00:08:30.517 "rw_ios_per_sec": 0, 00:08:30.517 "rw_mbytes_per_sec": 0, 00:08:30.517 "r_mbytes_per_sec": 0, 00:08:30.517 "w_mbytes_per_sec": 0 00:08:30.517 }, 00:08:30.517 "claimed": true, 00:08:30.517 "claim_type": "exclusive_write", 00:08:30.517 "zoned": false, 00:08:30.517 "supported_io_types": { 00:08:30.517 "read": true, 00:08:30.517 "write": true, 00:08:30.517 "unmap": true, 00:08:30.517 "flush": true, 00:08:30.517 "reset": true, 00:08:30.517 "nvme_admin": false, 00:08:30.517 "nvme_io": false, 00:08:30.517 "nvme_io_md": false, 00:08:30.517 "write_zeroes": true, 00:08:30.517 "zcopy": true, 00:08:30.517 "get_zone_info": false, 00:08:30.517 "zone_management": false, 00:08:30.517 "zone_append": false, 00:08:30.517 "compare": false, 00:08:30.517 "compare_and_write": false, 00:08:30.517 "abort": true, 00:08:30.517 "seek_hole": false, 00:08:30.517 "seek_data": false, 00:08:30.517 "copy": true, 00:08:30.517 "nvme_iov_md": false 00:08:30.517 }, 00:08:30.517 "memory_domains": [ 00:08:30.517 { 00:08:30.517 "dma_device_id": "system", 00:08:30.517 "dma_device_type": 1 00:08:30.517 }, 00:08:30.517 { 00:08:30.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.517 "dma_device_type": 2 00:08:30.517 } 00:08:30.517 ], 00:08:30.517 "driver_specific": {} 00:08:30.517 }, 00:08:30.517 { 00:08:30.517 "name": "Passthru0", 00:08:30.517 "aliases": [ 00:08:30.517 "0ea07fb1-130f-56fc-80c1-8eb6b4c9fceb" 00:08:30.517 ], 00:08:30.517 "product_name": "passthru", 00:08:30.517 "block_size": 512, 00:08:30.517 "num_blocks": 16384, 00:08:30.517 "uuid": "0ea07fb1-130f-56fc-80c1-8eb6b4c9fceb", 00:08:30.517 "assigned_rate_limits": { 00:08:30.517 "rw_ios_per_sec": 0, 00:08:30.517 "rw_mbytes_per_sec": 0, 00:08:30.517 "r_mbytes_per_sec": 0, 00:08:30.517 "w_mbytes_per_sec": 0 00:08:30.517 }, 00:08:30.517 "claimed": false, 00:08:30.517 "zoned": false, 00:08:30.517 "supported_io_types": { 00:08:30.517 "read": true, 00:08:30.517 "write": true, 00:08:30.517 "unmap": true, 00:08:30.517 "flush": true, 00:08:30.517 "reset": true, 
00:08:30.517 "nvme_admin": false, 00:08:30.517 "nvme_io": false, 00:08:30.517 "nvme_io_md": false, 00:08:30.517 "write_zeroes": true, 00:08:30.517 "zcopy": true, 00:08:30.517 "get_zone_info": false, 00:08:30.517 "zone_management": false, 00:08:30.517 "zone_append": false, 00:08:30.517 "compare": false, 00:08:30.517 "compare_and_write": false, 00:08:30.517 "abort": true, 00:08:30.517 "seek_hole": false, 00:08:30.517 "seek_data": false, 00:08:30.517 "copy": true, 00:08:30.517 "nvme_iov_md": false 00:08:30.517 }, 00:08:30.517 "memory_domains": [ 00:08:30.517 { 00:08:30.517 "dma_device_id": "system", 00:08:30.517 "dma_device_type": 1 00:08:30.517 }, 00:08:30.517 { 00:08:30.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.517 "dma_device_type": 2 00:08:30.517 } 00:08:30.517 ], 00:08:30.517 "driver_specific": { 00:08:30.517 "passthru": { 00:08:30.517 "name": "Passthru0", 00:08:30.517 "base_bdev_name": "Malloc2" 00:08:30.517 } 00:08:30.517 } 00:08:30.517 } 00:08:30.517 ]' 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:30.517 00:08:30.517 real 0m0.310s 00:08:30.517 user 0m0.192s 00:08:30.517 sys 0m0.053s 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.517 06:19:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:30.517 ************************************ 00:08:30.517 END TEST rpc_daemon_integrity 00:08:30.517 ************************************ 00:08:30.517 06:19:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:30.517 06:19:50 rpc -- rpc/rpc.sh@84 -- # killprocess 2471374 00:08:30.517 06:19:50 rpc -- common/autotest_common.sh@952 -- # '[' -z 2471374 ']' 00:08:30.517 06:19:50 rpc -- common/autotest_common.sh@956 -- # kill -0 2471374 00:08:30.517 06:19:50 rpc -- common/autotest_common.sh@957 -- # uname 00:08:30.517 06:19:50 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:30.517 06:19:50 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2471374 
00:08:30.779 06:19:50 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:30.779 06:19:50 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:30.779 06:19:50 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2471374' 00:08:30.779 killing process with pid 2471374 00:08:30.779 06:19:50 rpc -- common/autotest_common.sh@971 -- # kill 2471374 00:08:30.779 06:19:50 rpc -- common/autotest_common.sh@976 -- # wait 2471374 00:08:30.779 00:08:30.779 real 0m2.737s 00:08:30.779 user 0m3.479s 00:08:30.779 sys 0m0.855s 00:08:30.779 06:19:50 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.779 06:19:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.779 ************************************ 00:08:30.779 END TEST rpc 00:08:30.779 ************************************ 00:08:31.040 06:19:50 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:31.040 06:19:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:31.040 06:19:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:31.040 06:19:50 -- common/autotest_common.sh@10 -- # set +x 00:08:31.040 ************************************ 00:08:31.040 START TEST skip_rpc 00:08:31.040 ************************************ 00:08:31.040 06:19:50 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:31.040 * Looking for test storage... 00:08:31.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:31.040 06:19:50 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:31.040 06:19:50 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:31.040 06:19:50 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:31.040 06:19:50 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.040 06:19:50 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.301 06:19:50 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:31.301 06:19:50 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.301 06:19:50 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:31.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.301 --rc genhtml_branch_coverage=1 00:08:31.301 --rc genhtml_function_coverage=1 00:08:31.301 --rc genhtml_legend=1 00:08:31.301 --rc geninfo_all_blocks=1 00:08:31.301 --rc geninfo_unexecuted_blocks=1 00:08:31.301 00:08:31.301 ' 00:08:31.301 06:19:50 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:31.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.301 --rc genhtml_branch_coverage=1 00:08:31.301 --rc genhtml_function_coverage=1 00:08:31.301 --rc genhtml_legend=1 00:08:31.301 --rc geninfo_all_blocks=1 00:08:31.301 --rc geninfo_unexecuted_blocks=1 00:08:31.301 00:08:31.301 ' 00:08:31.301 06:19:50 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:31.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.301 --rc genhtml_branch_coverage=1 00:08:31.301 --rc genhtml_function_coverage=1 00:08:31.301 --rc genhtml_legend=1 00:08:31.301 --rc geninfo_all_blocks=1 00:08:31.301 --rc geninfo_unexecuted_blocks=1 00:08:31.301 00:08:31.301 ' 00:08:31.301 06:19:50 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:31.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.301 --rc genhtml_branch_coverage=1 00:08:31.301 --rc genhtml_function_coverage=1 00:08:31.301 --rc genhtml_legend=1 00:08:31.301 --rc geninfo_all_blocks=1 00:08:31.301 --rc geninfo_unexecuted_blocks=1 00:08:31.301 00:08:31.301 ' 00:08:31.301 06:19:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:31.301 06:19:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:31.301 06:19:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:31.301 06:19:50 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:31.301 06:19:50 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:31.301 06:19:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.301 ************************************ 00:08:31.301 START TEST skip_rpc 00:08:31.301 ************************************ 00:08:31.301 06:19:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:08:31.301 
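The rpc_cmd helper used throughout these scripts is, at bottom, a JSON-RPC 2.0 client talking to spdk_tgt's Unix domain socket (default /var/tmp/spdk.sock, typically via scripts/rpc.py). A bare-bones sketch of that client side with minimal error handling; the skip_rpc test that starts below launches spdk_tgt with --no-rpc-server, so the connect step fails and rpc_cmd spdk_get_version is expected to error out:

```c
/* Sketch of what rpc_cmd ultimately does: one JSON-RPC exchange over
 * spdk_tgt's Unix socket. Minimal error handling, illustration only. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int
send_spdk_rpc(const char *sock_path, const char *request)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	char reply[4096];
	ssize_t n;
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);

	if (fd < 0)
		return -1;
	strncpy(addr.sun_path, sock_path, sizeof(addr.sun_path) - 1);
	/* With spdk_tgt --no-rpc-server (the skip_rpc case below) the
	 * socket is never created, so this connect() fails, which is
	 * exactly what the test asserts. */
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}
	write(fd, request, strlen(request));
	n = read(fd, reply, sizeof(reply) - 1);
	if (n > 0) {
		reply[n] = '\0';
		printf("%s\n", reply);
	}
	close(fd);
	return 0;
}

/* Example:
 * send_spdk_rpc("/var/tmp/spdk.sock",
 *     "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"spdk_get_version\"}");
 */
```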
06:19:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2472222 00:08:31.301 06:19:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:31.301 06:19:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:31.301 06:19:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:31.301 [2024-11-20 06:19:51.073305] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:08:31.301 [2024-11-20 06:19:51.073369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472222 ] 00:08:31.301 [2024-11-20 06:19:51.184407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.561 [2024-11-20 06:19:51.236344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2472222 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 2472222 ']' 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 2472222 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2472222 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2472222' 00:08:36.848 killing process with pid 2472222 00:08:36.848 06:19:56 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 2472222 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 2472222 00:08:36.848 00:08:36.848 real 0m5.261s 00:08:36.848 user 0m4.991s 00:08:36.848 sys 0m0.307s 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.848 06:19:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.848 ************************************ 00:08:36.848 END TEST skip_rpc 00:08:36.848 ************************************ 00:08:36.848 06:19:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:36.848 06:19:56 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:36.848 06:19:56 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.848 06:19:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.848 ************************************ 00:08:36.848 START TEST skip_rpc_with_json 00:08:36.848 ************************************ 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2473263 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2473263 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 2473263 ']' 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.848 06:19:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:36.849 [2024-11-20 06:19:56.407412] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:08:36.849 [2024-11-20 06:19:56.407460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473263 ] 00:08:36.849 [2024-11-20 06:19:56.491377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.849 [2024-11-20 06:19:56.521662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:37.421 [2024-11-20 06:19:57.214220] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
request:
{
  "trtype": "tcp",
  "method": "nvmf_get_transports",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -19,
  "message": "No such device"
}
00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:37.421 [2024-11-20 06:19:57.226316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.421 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:37.684 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.684 06:19:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
{
  "subsystems": [
    {
      "subsystem": "fsdev",
      "config": [
        {
          "method": "fsdev_set_opts",
          "params": {
            "fsdev_io_pool_size": 65535,
            "fsdev_io_cache_size": 256
          }
        }
      ]
    },
    {
      "subsystem": "vfio_user_target",
      "config": null
    },
    {
      "subsystem": "keyring",
      "config": []
    },
    {
      "subsystem": "iobuf",
      "config": [
        {
          "method": "iobuf_set_options",
          "params": {
            "small_pool_count": 8192,
            "large_pool_count": 1024,
            "small_bufsize": 8192,
            "large_bufsize": 135168,
            "enable_numa": false
          }
        }
      ]
    },
    {
      "subsystem": "sock",
      "config": [
        {
          "method": "sock_set_default_impl",
          "params": {
            "impl_name": "posix"
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "ssl",
            "recv_buf_size": 4096,
            "send_buf_size": 4096,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "posix",
            "recv_buf_size": 2097152,
            "send_buf_size": 2097152,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        }
      ]
    },
    {
      "subsystem": "vmd",
      "config": []
    },
    {
      "subsystem": "accel",
      "config": [
        {
          "method": "accel_set_options",
          "params": {
            "small_cache_size": 128,
            "large_cache_size": 16,
            "task_count": 2048,
            "sequence_count": 2048,
            "buf_count": 2048
          }
        }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_set_options",
          "params": {
            "bdev_io_pool_size": 65535,
            "bdev_io_cache_size": 256,
            "bdev_auto_examine": true,
            "iobuf_small_cache_size": 128,
            "iobuf_large_cache_size": 16
          }
        },
        {
          "method": "bdev_raid_set_options",
          "params": {
            "process_window_size_kb": 1024,
            "process_max_bandwidth_mb_sec": 0
          }
        },
        {
          "method": "bdev_iscsi_set_options",
          "params": {
            "timeout_sec": 30
          }
        },
        {
          "method": "bdev_nvme_set_options",
          "params": {
            "action_on_timeout": "none",
            "timeout_us": 0,
            "timeout_admin_us": 0,
            "keep_alive_timeout_ms": 10000,
            "arbitration_burst": 0,
            "low_priority_weight": 0,
            "medium_priority_weight": 0,
            "high_priority_weight": 0,
            "nvme_adminq_poll_period_us": 10000,
            "nvme_ioq_poll_period_us": 0,
            "io_queue_requests": 0,
            "delay_cmd_submit": true,
            "transport_retry_count": 4,
            "bdev_retry_count": 3,
            "transport_ack_timeout": 0,
            "ctrlr_loss_timeout_sec": 0,
            "reconnect_delay_sec": 0,
            "fast_io_fail_timeout_sec": 0,
            "disable_auto_failback": false,
            "generate_uuids": false,
            "transport_tos": 0,
            "nvme_error_stat": false,
            "rdma_srq_size": 0,
            "io_path_stat": false,
            "allow_accel_sequence": false,
            "rdma_max_cq_size": 0,
            "rdma_cm_event_timeout_ms": 0,
            "dhchap_digests": ["sha256", "sha384", "sha512"],
            "dhchap_dhgroups": ["null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192"]
          }
        },
        {
          "method": "bdev_nvme_set_hotplug",
          "params": {
            "period_us": 100000,
            "enable": false
          }
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    },
    {
      "subsystem": "scsi",
      "config": null
    },
    {
      "subsystem": "scheduler",
      "config": [
        {
          "method": "framework_set_scheduler",
          "params": {
            "name": "static"
          }
        }
      ]
    },
    {
      "subsystem": "vhost_scsi",
      "config": []
    },
    {
      "subsystem": "vhost_blk",
      "config": []
    },
    {
      "subsystem": "ublk",
      "config": []
    },
    {
      "subsystem": "nbd",
      "config": []
    },
    {
      "subsystem": "nvmf",
      "config": [
        {
          "method": "nvmf_set_config",
          "params": {
            "discovery_filter": "match_any",
            "admin_cmd_passthru": {
              "identify_ctrlr": false
            },
            "dhchap_digests": ["sha256", "sha384", "sha512"],
            "dhchap_dhgroups": ["null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192"]
          }
        },
        {
          "method": "nvmf_set_max_subsystems",
          "params": {
            "max_subsystems": 1024
          }
        },
        {
          "method": "nvmf_set_crdt",
          "params": {
            "crdt1": 0,
            "crdt2": 0,
            "crdt3": 0
          }
        },
        {
          "method": "nvmf_create_transport",
          "params": {
            "trtype": "TCP",
            "max_queue_depth": 128,
            "max_io_qpairs_per_ctrlr": 127,
            "in_capsule_data_size": 4096,
            "max_io_size": 131072,
            "io_unit_size": 131072,
            "max_aq_depth": 128,
            "num_shared_buffers": 511,
            "buf_cache_size": 4294967295,
            "dif_insert_or_strip": false,
            "zcopy": false,
            "c2h_success": true,
            "sock_priority": 0,
            "abort_timeout_sec": 1,
            "ack_timeout": 0,
            "data_wr_pool_size": 0
          }
        }
      ]
    },
    {
      "subsystem": "iscsi",
      "config": [
        {
          "method": "iscsi_set_options",
          "params": {
            "node_base": "iqn.2016-06.io.spdk",
            "max_sessions": 128,
            "max_connections_per_session": 2,
            "max_queue_depth": 64,
            "default_time2wait": 2,
            "default_time2retain": 20,
            "first_burst_length": 8192,
            "immediate_data": true,
            "allow_duplicated_isid": false,
            "error_recovery_level": 0,
            "nop_timeout": 60,
            "nop_in_interval": 30,
            "disable_chap": false,
            "require_chap": false,
            "mutual_chap": false,
            "chap_group": 0,
            "max_large_datain_per_connection": 64,
            "max_r2t_per_connection": 4,
            "pdu_pool_size": 36864,
            "immediate_data_pool_size": 16384,
            "data_out_pool_size": 2048
          }
        }
      ]
    }
  ]
}
00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2473263 00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2473263 ']' 00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2473263 00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2473263 00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2473263' 00:08:37.685 killing process with pid 2473263 00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2473263 00:08:37.685 06:19:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2473263 00:08:37.945 06:19:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2473603 00:08:37.945 06:19:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:37.945 06:19:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2473603 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2473603 ']' 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2473603 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2473603 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 --
# echo 'killing process with pid 2473603' 00:08:43.234 killing process with pid 2473603 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2473603 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2473603 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:43.234 00:08:43.234 real 0m6.561s 00:08:43.234 user 0m6.475s 00:08:43.234 sys 0m0.570s 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:43.234 ************************************ 00:08:43.234 END TEST skip_rpc_with_json 00:08:43.234 ************************************ 00:08:43.234 06:20:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:43.234 06:20:02 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:43.234 06:20:02 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:43.234 06:20:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.234 ************************************ 00:08:43.234 START TEST skip_rpc_with_delay 00:08:43.234 ************************************ 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:43.234 06:20:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:43.234 
[2024-11-20 06:20:03.053908] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:08:43.234 06:20:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:43.234 06:20:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.234 06:20:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:43.234 06:20:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.234 00:08:43.234 real 0m0.079s 00:08:43.234 user 0m0.045s 00:08:43.234 sys 0m0.033s 00:08:43.234 06:20:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:43.234 06:20:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:43.234 ************************************ 00:08:43.234 END TEST skip_rpc_with_delay 00:08:43.234 ************************************ 00:08:43.234 06:20:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:43.234 06:20:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:43.234 06:20:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:43.234 06:20:03 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:43.234 06:20:03 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:43.234 06:20:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.494 ************************************ 00:08:43.494 START TEST exit_on_failed_rpc_init 00:08:43.494 ************************************ 00:08:43.494 06:20:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:08:43.494 06:20:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2474692 00:08:43.494 06:20:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2474692 00:08:43.494 06:20:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:43.494 06:20:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 2474692 ']' 00:08:43.494 06:20:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.494 06:20:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:43.495 06:20:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.495 06:20:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:43.495 06:20:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:43.495 [2024-11-20 06:20:03.211725] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
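The '--wait-for-rpc' error logged at the top of this block is the condition skip_rpc_with_delay exists to assert: waiting for an RPC-driven framework_start_init makes no sense when --no-rpc-server suppresses the RPC listener. A minimal standalone sketch of the same assertion (binary path as used throughout this workspace; the harness itself routes this through its NOT/valid_exec_arg helpers instead):

    # expect spdk_tgt to refuse the contradictory flag pair and exit non-zero
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'ERROR: spdk_tgt accepted --no-rpc-server with --wait-for-rpc' >&2
        exit 1
    fi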
00:08:43.495 [2024-11-20 06:20:03.211781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474692 ] 00:08:43.495 [2024-11-20 06:20:03.297092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.495 [2024-11-20 06:20:03.328239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:44.435 [2024-11-20 06:20:04.072238] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:08:44.435 [2024-11-20 06:20:04.072289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474876 ] 00:08:44.435 [2024-11-20 06:20:04.160839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.435 [2024-11-20 06:20:04.197097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.435 [2024-11-20 06:20:04.197151] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:44.435 [2024-11-20 06:20:04.197162] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:44.435 [2024-11-20 06:20:04.197169] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2474692 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 2474692 ']' 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 2474692 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2474692 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2474692' 00:08:44.435 killing process with pid 2474692 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 2474692 00:08:44.435 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 2474692 00:08:44.696 00:08:44.696 real 0m1.334s 00:08:44.696 user 0m1.574s 00:08:44.696 sys 0m0.381s 00:08:44.696 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.696 06:20:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:44.696 ************************************ 00:08:44.696 END TEST exit_on_failed_rpc_init 00:08:44.696 ************************************ 00:08:44.696 06:20:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:44.696 00:08:44.696 real 0m13.769s 00:08:44.696 user 0m13.307s 00:08:44.696 sys 0m1.636s 00:08:44.696 06:20:04 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.696 06:20:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.696 ************************************ 00:08:44.696 END TEST skip_rpc 00:08:44.696 ************************************ 00:08:44.696 06:20:04 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:44.696 06:20:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:44.696 06:20:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.696 06:20:04 -- 
common/autotest_common.sh@10 -- # set +x 00:08:44.696 ************************************ 00:08:44.696 START TEST rpc_client 00:08:44.696 ************************************ 00:08:44.696 06:20:04 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:44.956 * Looking for test storage... 00:08:44.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:44.957 06:20:04 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:44.957 06:20:04 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:08:44.957 06:20:04 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:44.957 06:20:04 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.957 06:20:04 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:44.957 06:20:04 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.957 06:20:04 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:44.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.957 --rc genhtml_branch_coverage=1 00:08:44.957 --rc genhtml_function_coverage=1 00:08:44.957 --rc genhtml_legend=1 00:08:44.957 --rc geninfo_all_blocks=1 00:08:44.957 --rc geninfo_unexecuted_blocks=1 00:08:44.957 00:08:44.957 ' 00:08:44.957 06:20:04 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:44.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.957 --rc genhtml_branch_coverage=1 00:08:44.957 --rc genhtml_function_coverage=1 00:08:44.957 --rc genhtml_legend=1 00:08:44.957 --rc geninfo_all_blocks=1 00:08:44.957 --rc geninfo_unexecuted_blocks=1 00:08:44.957 00:08:44.957 ' 00:08:44.957 06:20:04 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:44.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.957 --rc genhtml_branch_coverage=1 00:08:44.957 --rc genhtml_function_coverage=1 00:08:44.957 --rc genhtml_legend=1 00:08:44.957 --rc geninfo_all_blocks=1 00:08:44.957 --rc geninfo_unexecuted_blocks=1 00:08:44.957 00:08:44.957 ' 00:08:44.957 06:20:04 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:44.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.957 --rc genhtml_branch_coverage=1 00:08:44.957 --rc genhtml_function_coverage=1 00:08:44.957 --rc genhtml_legend=1 00:08:44.957 --rc geninfo_all_blocks=1 00:08:44.957 --rc geninfo_unexecuted_blocks=1 00:08:44.957 00:08:44.957 ' 00:08:44.957 06:20:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:44.957 OK 00:08:44.957 06:20:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:44.957 00:08:44.957 real 0m0.226s 00:08:44.957 user 0m0.114s 00:08:44.957 sys 0m0.127s 00:08:44.957 06:20:04 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.957 06:20:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:44.957 ************************************ 00:08:44.957 END TEST rpc_client 00:08:44.957 ************************************ 00:08:44.957 06:20:04 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
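The rpc_client suite that just reported OK drives spdk_tgt's JSON-RPC Unix socket from the C client library via test/rpc_client/rpc_client_test. For orientation, the same socket can be exercised from the shell; a sketch assuming a target is already listening on the default /var/tmp/spdk.sock:

    # issue the same kind of requests rpc_client_test sends from C
    ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_reactors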
00:08:44.957 06:20:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:44.957 06:20:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.957 06:20:04 -- common/autotest_common.sh@10 -- # set +x 00:08:45.217 ************************************ 00:08:45.217 START TEST json_config 00:08:45.217 ************************************ 00:08:45.217 06:20:04 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:45.217 06:20:04 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.217 06:20:04 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.217 06:20:04 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.217 06:20:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.217 06:20:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.217 06:20:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.217 06:20:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.217 06:20:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.217 06:20:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.217 06:20:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.217 06:20:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.217 06:20:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.217 06:20:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.217 06:20:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.217 06:20:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:45.217 06:20:05 json_config -- scripts/common.sh@345 -- # : 1 00:08:45.217 06:20:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.217 06:20:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.217 06:20:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:45.217 06:20:05 json_config -- scripts/common.sh@353 -- # local d=1 00:08:45.217 06:20:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.217 06:20:05 json_config -- scripts/common.sh@355 -- # echo 1 00:08:45.217 06:20:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.217 06:20:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:45.217 06:20:05 json_config -- scripts/common.sh@353 -- # local d=2 00:08:45.217 06:20:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.217 06:20:05 json_config -- scripts/common.sh@355 -- # echo 2 00:08:45.217 06:20:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.217 06:20:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.217 06:20:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.217 06:20:05 json_config -- scripts/common.sh@368 -- # return 0 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.217 --rc genhtml_branch_coverage=1 00:08:45.217 --rc genhtml_function_coverage=1 00:08:45.217 --rc genhtml_legend=1 00:08:45.217 --rc geninfo_all_blocks=1 00:08:45.217 --rc geninfo_unexecuted_blocks=1 00:08:45.217 00:08:45.217 ' 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.217 --rc genhtml_branch_coverage=1 00:08:45.217 --rc genhtml_function_coverage=1 00:08:45.217 --rc genhtml_legend=1 00:08:45.217 --rc geninfo_all_blocks=1 00:08:45.217 --rc geninfo_unexecuted_blocks=1 00:08:45.217 00:08:45.217 ' 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:45.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.217 --rc genhtml_branch_coverage=1 00:08:45.217 --rc genhtml_function_coverage=1 00:08:45.217 --rc genhtml_legend=1 00:08:45.217 --rc geninfo_all_blocks=1 00:08:45.217 --rc geninfo_unexecuted_blocks=1 00:08:45.217 00:08:45.217 ' 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.217 --rc genhtml_branch_coverage=1 00:08:45.217 --rc genhtml_function_coverage=1 00:08:45.217 --rc genhtml_legend=1 00:08:45.217 --rc geninfo_all_blocks=1 00:08:45.217 --rc geninfo_unexecuted_blocks=1 00:08:45.217 00:08:45.217 ' 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:08:45.217 06:20:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.217 06:20:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.217 06:20:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.217 06:20:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.217 06:20:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.217 06:20:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.217 06:20:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.217 06:20:05 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.217 06:20:05 json_config -- paths/export.sh@5 -- # export PATH 00:08:45.217 06:20:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@51 -- # : 0 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
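The identity pair derived above (NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID as its trailing uuid) feeds the NVME_HOST arguments used by later connect steps. A sketch of the same derivation, assuming nvme-cli is installed; the parameter expansion is one illustrative way to split out the uuid, not necessarily how common.sh does it:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # strip everything up to and including 'uuid:'
    nvme connect -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"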
00:08:45.217 06:20:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.217 06:20:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:45.217 INFO: JSON configuration test init 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.217 06:20:05 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:45.217 06:20:05 json_config -- 
json_config/common.sh@9 -- # local app=target 00:08:45.217 06:20:05 json_config -- json_config/common.sh@10 -- # shift 00:08:45.217 06:20:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:45.217 06:20:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:45.217 06:20:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:45.217 06:20:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:45.217 06:20:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:45.217 06:20:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2475163 00:08:45.217 06:20:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:45.217 Waiting for target to run... 00:08:45.217 06:20:05 json_config -- json_config/common.sh@25 -- # waitforlisten 2475163 /var/tmp/spdk_tgt.sock 00:08:45.217 06:20:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@833 -- # '[' -z 2475163 ']' 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:45.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:45.217 06:20:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:45.478 [2024-11-20 06:20:05.177145] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:08:45.478 [2024-11-20 06:20:05.177204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2475163 ] 00:08:45.738 [2024-11-20 06:20:05.490790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.738 [2024-11-20 06:20:05.519118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.309 06:20:05 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:46.309 06:20:05 json_config -- common/autotest_common.sh@866 -- # return 0 00:08:46.309 06:20:05 json_config -- json_config/common.sh@26 -- # echo '' 00:08:46.309 00:08:46.309 06:20:05 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:46.309 06:20:05 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:46.309 06:20:05 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:46.309 06:20:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:46.309 06:20:05 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:46.309 06:20:05 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:46.309 06:20:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:46.309 06:20:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:46.309 06:20:06 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:46.309 06:20:06 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:46.309 06:20:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:46.881 06:20:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:46.881 06:20:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:46.881 06:20:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:46.881 06:20:06 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@54 -- # sort 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:46.881 06:20:06 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:46.881 06:20:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:46.881 06:20:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:47.142 06:20:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:47.142 06:20:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:47.142 06:20:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:47.142 MallocForNvmf0 00:08:47.142 06:20:06 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:47.142 06:20:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:47.402 MallocForNvmf1 00:08:47.402 06:20:07 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:47.402 06:20:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:47.662 [2024-11-20 06:20:07.332047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.662 06:20:07 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:47.662 06:20:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:47.662 06:20:07 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:47.662 06:20:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:47.923 06:20:07 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:47.923 06:20:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:48.184 06:20:07 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:48.184 06:20:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:48.184 [2024-11-20 06:20:08.046219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:48.184 06:20:08 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:48.184 06:20:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:48.184 06:20:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:48.444 06:20:08 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:48.444 06:20:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:48.444 06:20:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:48.444 06:20:08 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:48.444 06:20:08 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:48.444 06:20:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:48.444 MallocBdevForConfigChangeCheck 00:08:48.444 06:20:08 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:48.444 06:20:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:48.444 06:20:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:48.705 06:20:08 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:48.705 06:20:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:48.966 06:20:08 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:08:48.966 INFO: shutting down applications... 
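For reference, the target setup traced above condenses to the RPC sequence below — a sketch of what tgt_rpc issued in this run, using the same socket and arguments; the timing_enter/timing_exit bookkeeping and error traps are omitted:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

Once the listener is added, the target prints the "NVMe/TCP Target Listening on 127.0.0.1 port 4420" notice seen above.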
00:08:48.966 06:20:08 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:48.966 06:20:08 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:48.966 06:20:08 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:48.966 06:20:08 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:49.226 Calling clear_iscsi_subsystem 00:08:49.226 Calling clear_nvmf_subsystem 00:08:49.226 Calling clear_nbd_subsystem 00:08:49.226 Calling clear_ublk_subsystem 00:08:49.226 Calling clear_vhost_blk_subsystem 00:08:49.226 Calling clear_vhost_scsi_subsystem 00:08:49.226 Calling clear_bdev_subsystem 00:08:49.226 06:20:09 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:49.226 06:20:09 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:49.226 06:20:09 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:49.226 06:20:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:49.226 06:20:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:49.226 06:20:09 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:49.801 06:20:09 json_config -- json_config/json_config.sh@352 -- # break 00:08:49.802 06:20:09 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:49.802 06:20:09 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:08:49.802 06:20:09 json_config -- json_config/common.sh@31 -- # local app=target 00:08:49.802 06:20:09 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:49.802 06:20:09 json_config -- json_config/common.sh@35 -- # [[ -n 2475163 ]] 00:08:49.802 06:20:09 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2475163 00:08:49.802 06:20:09 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:49.802 06:20:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:49.802 06:20:09 json_config -- json_config/common.sh@41 -- # kill -0 2475163 00:08:49.802 06:20:09 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:50.374 06:20:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:50.374 06:20:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:50.374 06:20:10 json_config -- json_config/common.sh@41 -- # kill -0 2475163 00:08:50.374 06:20:10 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:50.374 06:20:10 json_config -- json_config/common.sh@43 -- # break 00:08:50.374 06:20:10 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:50.374 06:20:10 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:50.374 SPDK target shutdown done 00:08:50.374 06:20:10 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:50.374 INFO: relaunching applications... 
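The shutdown traced here first clears the live configuration with clear_config.py, then SIGINTs the target and polls for exit. Reduced to its core (pid taken from this run; the real json_config/common.sh additionally resets app_pid["$app"] once the process is gone):

  pid=2475163
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break   # signal 0 only probes existence
      sleep 0.5
  done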
00:08:50.374 06:20:10 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:50.374 06:20:10 json_config -- json_config/common.sh@9 -- # local app=target 00:08:50.374 06:20:10 json_config -- json_config/common.sh@10 -- # shift 00:08:50.374 06:20:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:50.374 06:20:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:50.374 06:20:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:50.374 06:20:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:50.374 06:20:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:50.374 06:20:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2476294 00:08:50.374 06:20:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:50.374 Waiting for target to run... 00:08:50.374 06:20:10 json_config -- json_config/common.sh@25 -- # waitforlisten 2476294 /var/tmp/spdk_tgt.sock 00:08:50.374 06:20:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:50.374 06:20:10 json_config -- common/autotest_common.sh@833 -- # '[' -z 2476294 ']' 00:08:50.374 06:20:10 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:50.374 06:20:10 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:50.374 06:20:10 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:50.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:50.374 06:20:10 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:50.374 06:20:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:50.374 [2024-11-20 06:20:10.099656] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:08:50.374 [2024-11-20 06:20:10.099726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2476294 ] 00:08:50.635 [2024-11-20 06:20:10.412052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.635 [2024-11-20 06:20:10.437472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.206 [2024-11-20 06:20:10.942410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.206 [2024-11-20 06:20:10.974799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:51.206 06:20:11 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:51.206 06:20:11 json_config -- common/autotest_common.sh@866 -- # return 0 00:08:51.206 06:20:11 json_config -- json_config/common.sh@26 -- # echo '' 00:08:51.206 00:08:51.206 06:20:11 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:51.206 06:20:11 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:51.206 INFO: Checking if target configuration is the same... 
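Relaunching reuses the configuration captured from the first instance: save_config output is written to spdk_tgt_config.json and fed back via --json, so the new target (pid 2476294 here) comes up with the same bdevs, subsystem and listener. A condensed sketch of that round-trip, with the binary and socket paths used in this workspace:

  sock=/var/tmp/spdk_tgt.sock
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/scripts/rpc.py -s $sock save_config > $spdk/spdk_tgt_config.json
  # ...old instance is shut down first, then:
  $spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r $sock --json $spdk/spdk_tgt_config.json &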
00:08:51.206 06:20:11 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:51.206 06:20:11 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:51.206 06:20:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:51.206 + '[' 2 -ne 2 ']' 00:08:51.206 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:51.206 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:51.206 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:51.206 +++ basename /dev/fd/62 00:08:51.206 ++ mktemp /tmp/62.XXX 00:08:51.206 + tmp_file_1=/tmp/62.Wvy 00:08:51.206 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:51.206 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:51.206 + tmp_file_2=/tmp/spdk_tgt_config.json.MJj 00:08:51.206 + ret=0 00:08:51.206 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:51.467 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:51.467 + diff -u /tmp/62.Wvy /tmp/spdk_tgt_config.json.MJj 00:08:51.728 + echo 'INFO: JSON config files are the same' 00:08:51.728 INFO: JSON config files are the same 00:08:51.728 + rm /tmp/62.Wvy /tmp/spdk_tgt_config.json.MJj 00:08:51.728 + exit 0 00:08:51.728 06:20:11 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:51.728 06:20:11 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:51.728 INFO: changing configuration and checking if this can be detected... 00:08:51.728 06:20:11 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:51.728 06:20:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:51.728 06:20:11 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:51.728 06:20:11 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:51.728 06:20:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:51.728 + '[' 2 -ne 2 ']' 00:08:51.728 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:51.728 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:08:51.728 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:51.728 +++ basename /dev/fd/62 00:08:51.728 ++ mktemp /tmp/62.XXX 00:08:51.728 + tmp_file_1=/tmp/62.s0h 00:08:51.728 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:51.728 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:51.728 + tmp_file_2=/tmp/spdk_tgt_config.json.zXh 00:08:51.728 + ret=0 00:08:51.728 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:51.989 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:52.249 + diff -u /tmp/62.s0h /tmp/spdk_tgt_config.json.zXh 00:08:52.249 + ret=1 00:08:52.249 + echo '=== Start of file: /tmp/62.s0h ===' 00:08:52.249 + cat /tmp/62.s0h 00:08:52.249 + echo '=== End of file: /tmp/62.s0h ===' 00:08:52.249 + echo '' 00:08:52.249 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zXh ===' 00:08:52.249 + cat /tmp/spdk_tgt_config.json.zXh 00:08:52.249 + echo '=== End of file: /tmp/spdk_tgt_config.json.zXh ===' 00:08:52.249 + echo '' 00:08:52.249 + rm /tmp/62.s0h /tmp/spdk_tgt_config.json.zXh 00:08:52.249 + exit 1 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:52.249 INFO: configuration change detected. 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:52.249 06:20:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.249 06:20:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@324 -- # [[ -n 2476294 ]] 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:52.249 06:20:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.249 06:20:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:52.249 06:20:11 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:52.249 06:20:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.249 06:20:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.249 06:20:12 json_config -- json_config/json_config.sh@330 -- # killprocess 2476294 00:08:52.249 06:20:12 json_config -- common/autotest_common.sh@952 -- # '[' -z 2476294 ']' 00:08:52.249 06:20:12 json_config -- common/autotest_common.sh@956 -- # kill -0 2476294 00:08:52.249 06:20:12 json_config -- common/autotest_common.sh@957 -- # uname 00:08:52.249 06:20:12 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:52.249 06:20:12 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2476294 00:08:52.249 06:20:12 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:52.249 06:20:12 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:52.250 06:20:12 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2476294' 00:08:52.250 killing process with pid 2476294 00:08:52.250 06:20:12 json_config -- common/autotest_common.sh@971 -- # kill 2476294 00:08:52.250 06:20:12 json_config -- common/autotest_common.sh@976 -- # wait 2476294 00:08:52.510 06:20:12 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:52.510 06:20:12 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:52.510 06:20:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.510 06:20:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.510 06:20:12 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:52.510 06:20:12 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:52.510 INFO: Success 00:08:52.510 00:08:52.510 real 0m7.485s 00:08:52.510 user 0m9.104s 00:08:52.510 sys 0m1.941s 00:08:52.510 06:20:12 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:52.510 06:20:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:52.510 ************************************ 00:08:52.510 END TEST json_config 00:08:52.510 ************************************ 00:08:52.772 06:20:12 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:52.772 06:20:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:52.772 06:20:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:52.772 06:20:12 -- common/autotest_common.sh@10 -- # set +x 00:08:52.772 ************************************ 00:08:52.772 START TEST json_config_extra_key 00:08:52.772 ************************************ 00:08:52.772 06:20:12 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:52.772 06:20:12 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:52.772 06:20:12 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:08:52.772 06:20:12 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:52.772 06:20:12 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.772 06:20:12 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.772 06:20:12 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:52.772 06:20:12 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.772 06:20:12 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.772 --rc genhtml_branch_coverage=1 00:08:52.772 --rc genhtml_function_coverage=1 00:08:52.772 --rc genhtml_legend=1 00:08:52.772 --rc geninfo_all_blocks=1 00:08:52.772 --rc geninfo_unexecuted_blocks=1 00:08:52.772 00:08:52.772 ' 00:08:52.772 06:20:12 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.772 --rc genhtml_branch_coverage=1 00:08:52.772 --rc genhtml_function_coverage=1 00:08:52.772 --rc genhtml_legend=1 00:08:52.772 --rc geninfo_all_blocks=1 00:08:52.772 --rc geninfo_unexecuted_blocks=1 00:08:52.772 00:08:52.772 ' 00:08:52.772 06:20:12 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.772 --rc genhtml_branch_coverage=1 00:08:52.772 --rc genhtml_function_coverage=1 00:08:52.772 --rc genhtml_legend=1 00:08:52.772 --rc geninfo_all_blocks=1 00:08:52.772 --rc geninfo_unexecuted_blocks=1 00:08:52.772 00:08:52.772 ' 00:08:52.772 06:20:12 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:52.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.772 --rc genhtml_branch_coverage=1 00:08:52.772 --rc genhtml_function_coverage=1 00:08:52.772 --rc genhtml_legend=1 00:08:52.772 --rc geninfo_all_blocks=1 00:08:52.772 --rc geninfo_unexecuted_blocks=1 00:08:52.772 00:08:52.772 ' 00:08:52.772 06:20:12 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:52.772 06:20:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.773 06:20:12 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.773 06:20:12 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.773 06:20:12 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.773 06:20:12 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.773 06:20:12 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.773 06:20:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.773 06:20:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.773 06:20:12 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.773 06:20:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:52.773 06:20:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.773 06:20:12 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:52.773 06:20:12 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.773 06:20:12 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.773 06:20:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.773 06:20:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.773 06:20:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.773 06:20:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.773 06:20:12 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.773 06:20:12 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.773 06:20:12 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:52.773 INFO: launching applications... 
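The "[: : integer expression expected" complaint from nvmf/common.sh line 33 above is bash evaluating '[' '' -eq 1 ']': the tested value is empty and -eq needs integers on both sides, so the test merely evaluates false and the run continues. A guarded form that tolerates unset or empty values would avoid the noise (variable name illustrative, not the actual one in common.sh):

  if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then   # default unset/empty to 0
      echo "flag enabled"
  fi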
00:08:52.773 06:20:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:52.773 06:20:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:52.773 06:20:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:52.773 06:20:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:52.773 06:20:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:52.773 06:20:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:52.773 06:20:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:52.773 06:20:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:52.773 06:20:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2477032 00:08:52.773 06:20:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:52.773 Waiting for target to run... 00:08:52.773 06:20:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2477032 /var/tmp/spdk_tgt.sock 00:08:52.773 06:20:12 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 2477032 ']' 00:08:52.773 06:20:12 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:52.773 06:20:12 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:52.773 06:20:12 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:52.773 06:20:12 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:52.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:52.773 06:20:12 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:52.773 06:20:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:53.035 [2024-11-20 06:20:12.741942] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:08:53.035 [2024-11-20 06:20:12.742016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477032 ] 00:08:53.295 [2024-11-20 06:20:13.066906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.295 [2024-11-20 06:20:13.091397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.867 06:20:13 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:53.867 06:20:13 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:08:53.867 06:20:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:53.867 00:08:53.867 06:20:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:53.867 INFO: shutting down applications... 
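Between launch and the shutdown that follows, waitforlisten blocks until the new pid (2477032 here) answers on the UNIX socket, within the max_retries=100 budget visible in the trace. A simplified stand-in for that wait — not the actual autotest_common.sh helper — polling with rpc_get_methods, an RPC this target serves (its full method list appears later in this log):

  sock=/var/tmp/spdk_tgt.sock
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  for ((i = 0; i < 100; i++)); do
      $spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done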
00:08:53.867 06:20:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:53.867 06:20:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:53.867 06:20:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:53.867 06:20:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2477032 ]] 00:08:53.867 06:20:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2477032 00:08:53.867 06:20:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:53.867 06:20:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:53.867 06:20:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2477032 00:08:53.867 06:20:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:54.439 06:20:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:54.439 06:20:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:54.439 06:20:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2477032 00:08:54.439 06:20:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:54.439 06:20:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:54.439 06:20:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:54.439 06:20:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:54.439 SPDK target shutdown done 00:08:54.439 06:20:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:54.439 Success 00:08:54.439 00:08:54.439 real 0m1.581s 00:08:54.439 user 0m1.162s 00:08:54.439 sys 0m0.451s 00:08:54.439 06:20:14 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.439 06:20:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:54.439 ************************************ 00:08:54.439 END TEST json_config_extra_key 00:08:54.439 ************************************ 00:08:54.439 06:20:14 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:54.439 06:20:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:54.439 06:20:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.439 06:20:14 -- common/autotest_common.sh@10 -- # set +x 00:08:54.439 ************************************ 00:08:54.439 START TEST alias_rpc 00:08:54.439 ************************************ 00:08:54.439 06:20:14 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:54.439 * Looking for test storage... 
00:08:54.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:54.439 06:20:14 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:54.439 06:20:14 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.440 06:20:14 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.440 --rc genhtml_branch_coverage=1 00:08:54.440 --rc genhtml_function_coverage=1 00:08:54.440 --rc genhtml_legend=1 00:08:54.440 --rc geninfo_all_blocks=1 00:08:54.440 --rc geninfo_unexecuted_blocks=1 00:08:54.440 00:08:54.440 ' 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.440 --rc genhtml_branch_coverage=1 00:08:54.440 --rc genhtml_function_coverage=1 00:08:54.440 --rc genhtml_legend=1 00:08:54.440 --rc geninfo_all_blocks=1 00:08:54.440 --rc geninfo_unexecuted_blocks=1 00:08:54.440 00:08:54.440 ' 00:08:54.440 06:20:14 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.440 --rc genhtml_branch_coverage=1 00:08:54.440 --rc genhtml_function_coverage=1 00:08:54.440 --rc genhtml_legend=1 00:08:54.440 --rc geninfo_all_blocks=1 00:08:54.440 --rc geninfo_unexecuted_blocks=1 00:08:54.440 00:08:54.440 ' 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:54.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.440 --rc genhtml_branch_coverage=1 00:08:54.440 --rc genhtml_function_coverage=1 00:08:54.440 --rc genhtml_legend=1 00:08:54.440 --rc geninfo_all_blocks=1 00:08:54.440 --rc geninfo_unexecuted_blocks=1 00:08:54.440 00:08:54.440 ' 00:08:54.440 06:20:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:54.440 06:20:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2477378 00:08:54.440 06:20:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2477378 00:08:54.440 06:20:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 2477378 ']' 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:54.440 06:20:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.701 [2024-11-20 06:20:14.392252] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:08:54.701 [2024-11-20 06:20:14.392328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477378 ] 00:08:54.701 [2024-11-20 06:20:14.479306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.701 [2024-11-20 06:20:14.514680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:55.643 06:20:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:55.643 06:20:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2477378 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 2477378 ']' 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 2477378 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2477378 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2477378' 00:08:55.643 killing process with pid 2477378 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@971 -- # kill 2477378 00:08:55.643 06:20:15 alias_rpc -- common/autotest_common.sh@976 -- # wait 2477378 00:08:55.904 00:08:55.904 real 0m1.516s 00:08:55.904 user 0m1.686s 00:08:55.904 sys 0m0.416s 00:08:55.904 06:20:15 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:55.904 06:20:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.904 ************************************ 00:08:55.904 END TEST alias_rpc 00:08:55.904 ************************************ 00:08:55.904 06:20:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:55.904 06:20:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:55.904 06:20:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:55.904 06:20:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:55.904 06:20:15 -- common/autotest_common.sh@10 -- # set +x 00:08:55.904 ************************************ 00:08:55.904 START TEST spdkcli_tcp 00:08:55.904 ************************************ 00:08:55.904 06:20:15 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:55.904 * Looking for test storage... 
00:08:56.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:56.165 06:20:15 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:56.165 06:20:15 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:56.165 06:20:15 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:56.165 06:20:15 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.165 06:20:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:56.165 06:20:15 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.165 06:20:15 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:56.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.165 --rc genhtml_branch_coverage=1 00:08:56.165 --rc genhtml_function_coverage=1 00:08:56.165 --rc genhtml_legend=1 00:08:56.165 --rc geninfo_all_blocks=1 00:08:56.165 --rc geninfo_unexecuted_blocks=1 00:08:56.165 00:08:56.165 ' 00:08:56.165 06:20:15 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:56.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.165 --rc genhtml_branch_coverage=1 00:08:56.165 --rc genhtml_function_coverage=1 00:08:56.165 --rc genhtml_legend=1 00:08:56.165 --rc geninfo_all_blocks=1 00:08:56.165 --rc 
geninfo_unexecuted_blocks=1 00:08:56.165 00:08:56.165 ' 00:08:56.165 06:20:15 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:56.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.165 --rc genhtml_branch_coverage=1 00:08:56.165 --rc genhtml_function_coverage=1 00:08:56.165 --rc genhtml_legend=1 00:08:56.166 --rc geninfo_all_blocks=1 00:08:56.166 --rc geninfo_unexecuted_blocks=1 00:08:56.166 00:08:56.166 ' 00:08:56.166 06:20:15 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:56.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.166 --rc genhtml_branch_coverage=1 00:08:56.166 --rc genhtml_function_coverage=1 00:08:56.166 --rc genhtml_legend=1 00:08:56.166 --rc geninfo_all_blocks=1 00:08:56.166 --rc geninfo_unexecuted_blocks=1 00:08:56.166 00:08:56.166 ' 00:08:56.166 06:20:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:56.166 06:20:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:56.166 06:20:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:56.166 06:20:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:56.166 06:20:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:56.166 06:20:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:56.166 06:20:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:56.166 06:20:15 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:56.166 06:20:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:56.166 06:20:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2477711 00:08:56.166 06:20:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2477711 00:08:56.166 06:20:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:56.166 06:20:15 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 2477711 ']' 00:08:56.166 06:20:15 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.166 06:20:15 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:56.166 06:20:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.166 06:20:15 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:56.166 06:20:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:56.166 [2024-11-20 06:20:15.993706] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:08:56.166 [2024-11-20 06:20:15.993795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2477711 ] 00:08:56.166 [2024-11-20 06:20:16.081405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:56.425 [2024-11-20 06:20:16.117786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.425 [2024-11-20 06:20:16.117818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.995 06:20:16 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:56.995 06:20:16 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:08:56.995 06:20:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2477896 00:08:56.995 06:20:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:56.995 06:20:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:57.257 [ 00:08:57.257 "bdev_malloc_delete", 00:08:57.257 "bdev_malloc_create", 00:08:57.257 "bdev_null_resize", 00:08:57.257 "bdev_null_delete", 00:08:57.257 "bdev_null_create", 00:08:57.257 "bdev_nvme_cuse_unregister", 00:08:57.257 "bdev_nvme_cuse_register", 00:08:57.257 "bdev_opal_new_user", 00:08:57.257 "bdev_opal_set_lock_state", 00:08:57.257 "bdev_opal_delete", 00:08:57.257 "bdev_opal_get_info", 00:08:57.257 "bdev_opal_create", 00:08:57.257 "bdev_nvme_opal_revert", 00:08:57.257 "bdev_nvme_opal_init", 00:08:57.257 "bdev_nvme_send_cmd", 00:08:57.257 "bdev_nvme_set_keys", 00:08:57.257 "bdev_nvme_get_path_iostat", 00:08:57.257 "bdev_nvme_get_mdns_discovery_info", 00:08:57.257 "bdev_nvme_stop_mdns_discovery", 00:08:57.257 "bdev_nvme_start_mdns_discovery", 00:08:57.257 "bdev_nvme_set_multipath_policy", 00:08:57.257 "bdev_nvme_set_preferred_path", 00:08:57.257 "bdev_nvme_get_io_paths", 00:08:57.257 "bdev_nvme_remove_error_injection", 00:08:57.257 "bdev_nvme_add_error_injection", 00:08:57.257 "bdev_nvme_get_discovery_info", 00:08:57.257 "bdev_nvme_stop_discovery", 00:08:57.257 "bdev_nvme_start_discovery", 00:08:57.257 "bdev_nvme_get_controller_health_info", 00:08:57.257 "bdev_nvme_disable_controller", 00:08:57.257 "bdev_nvme_enable_controller", 00:08:57.257 "bdev_nvme_reset_controller", 00:08:57.257 "bdev_nvme_get_transport_statistics", 00:08:57.257 "bdev_nvme_apply_firmware", 00:08:57.257 "bdev_nvme_detach_controller", 00:08:57.257 "bdev_nvme_get_controllers", 00:08:57.257 "bdev_nvme_attach_controller", 00:08:57.257 "bdev_nvme_set_hotplug", 00:08:57.257 "bdev_nvme_set_options", 00:08:57.257 "bdev_passthru_delete", 00:08:57.257 "bdev_passthru_create", 00:08:57.257 "bdev_lvol_set_parent_bdev", 00:08:57.257 "bdev_lvol_set_parent", 00:08:57.257 "bdev_lvol_check_shallow_copy", 00:08:57.257 "bdev_lvol_start_shallow_copy", 00:08:57.257 "bdev_lvol_grow_lvstore", 00:08:57.257 "bdev_lvol_get_lvols", 00:08:57.257 "bdev_lvol_get_lvstores", 00:08:57.257 "bdev_lvol_delete", 00:08:57.257 "bdev_lvol_set_read_only", 00:08:57.257 "bdev_lvol_resize", 00:08:57.257 "bdev_lvol_decouple_parent", 00:08:57.257 "bdev_lvol_inflate", 00:08:57.257 "bdev_lvol_rename", 00:08:57.257 "bdev_lvol_clone_bdev", 00:08:57.257 "bdev_lvol_clone", 00:08:57.257 "bdev_lvol_snapshot", 00:08:57.257 "bdev_lvol_create", 00:08:57.257 "bdev_lvol_delete_lvstore", 00:08:57.257 "bdev_lvol_rename_lvstore", 
00:08:57.257 "bdev_lvol_create_lvstore", 00:08:57.257 "bdev_raid_set_options", 00:08:57.257 "bdev_raid_remove_base_bdev", 00:08:57.257 "bdev_raid_add_base_bdev", 00:08:57.257 "bdev_raid_delete", 00:08:57.257 "bdev_raid_create", 00:08:57.257 "bdev_raid_get_bdevs", 00:08:57.257 "bdev_error_inject_error", 00:08:57.257 "bdev_error_delete", 00:08:57.257 "bdev_error_create", 00:08:57.257 "bdev_split_delete", 00:08:57.257 "bdev_split_create", 00:08:57.257 "bdev_delay_delete", 00:08:57.257 "bdev_delay_create", 00:08:57.257 "bdev_delay_update_latency", 00:08:57.257 "bdev_zone_block_delete", 00:08:57.257 "bdev_zone_block_create", 00:08:57.257 "blobfs_create", 00:08:57.257 "blobfs_detect", 00:08:57.257 "blobfs_set_cache_size", 00:08:57.257 "bdev_aio_delete", 00:08:57.257 "bdev_aio_rescan", 00:08:57.257 "bdev_aio_create", 00:08:57.257 "bdev_ftl_set_property", 00:08:57.257 "bdev_ftl_get_properties", 00:08:57.257 "bdev_ftl_get_stats", 00:08:57.257 "bdev_ftl_unmap", 00:08:57.257 "bdev_ftl_unload", 00:08:57.257 "bdev_ftl_delete", 00:08:57.257 "bdev_ftl_load", 00:08:57.257 "bdev_ftl_create", 00:08:57.257 "bdev_virtio_attach_controller", 00:08:57.257 "bdev_virtio_scsi_get_devices", 00:08:57.257 "bdev_virtio_detach_controller", 00:08:57.257 "bdev_virtio_blk_set_hotplug", 00:08:57.257 "bdev_iscsi_delete", 00:08:57.257 "bdev_iscsi_create", 00:08:57.257 "bdev_iscsi_set_options", 00:08:57.257 "accel_error_inject_error", 00:08:57.257 "ioat_scan_accel_module", 00:08:57.257 "dsa_scan_accel_module", 00:08:57.257 "iaa_scan_accel_module", 00:08:57.257 "vfu_virtio_create_fs_endpoint", 00:08:57.257 "vfu_virtio_create_scsi_endpoint", 00:08:57.257 "vfu_virtio_scsi_remove_target", 00:08:57.257 "vfu_virtio_scsi_add_target", 00:08:57.257 "vfu_virtio_create_blk_endpoint", 00:08:57.257 "vfu_virtio_delete_endpoint", 00:08:57.257 "keyring_file_remove_key", 00:08:57.257 "keyring_file_add_key", 00:08:57.257 "keyring_linux_set_options", 00:08:57.257 "fsdev_aio_delete", 00:08:57.257 "fsdev_aio_create", 00:08:57.257 "iscsi_get_histogram", 00:08:57.257 "iscsi_enable_histogram", 00:08:57.257 "iscsi_set_options", 00:08:57.257 "iscsi_get_auth_groups", 00:08:57.257 "iscsi_auth_group_remove_secret", 00:08:57.257 "iscsi_auth_group_add_secret", 00:08:57.257 "iscsi_delete_auth_group", 00:08:57.257 "iscsi_create_auth_group", 00:08:57.257 "iscsi_set_discovery_auth", 00:08:57.257 "iscsi_get_options", 00:08:57.257 "iscsi_target_node_request_logout", 00:08:57.257 "iscsi_target_node_set_redirect", 00:08:57.257 "iscsi_target_node_set_auth", 00:08:57.257 "iscsi_target_node_add_lun", 00:08:57.257 "iscsi_get_stats", 00:08:57.257 "iscsi_get_connections", 00:08:57.257 "iscsi_portal_group_set_auth", 00:08:57.257 "iscsi_start_portal_group", 00:08:57.257 "iscsi_delete_portal_group", 00:08:57.257 "iscsi_create_portal_group", 00:08:57.257 "iscsi_get_portal_groups", 00:08:57.257 "iscsi_delete_target_node", 00:08:57.257 "iscsi_target_node_remove_pg_ig_maps", 00:08:57.257 "iscsi_target_node_add_pg_ig_maps", 00:08:57.257 "iscsi_create_target_node", 00:08:57.257 "iscsi_get_target_nodes", 00:08:57.257 "iscsi_delete_initiator_group", 00:08:57.257 "iscsi_initiator_group_remove_initiators", 00:08:57.257 "iscsi_initiator_group_add_initiators", 00:08:57.257 "iscsi_create_initiator_group", 00:08:57.257 "iscsi_get_initiator_groups", 00:08:57.257 "nvmf_set_crdt", 00:08:57.257 "nvmf_set_config", 00:08:57.257 "nvmf_set_max_subsystems", 00:08:57.257 "nvmf_stop_mdns_prr", 00:08:57.257 "nvmf_publish_mdns_prr", 00:08:57.257 "nvmf_subsystem_get_listeners", 00:08:57.257 
"nvmf_subsystem_get_qpairs", 00:08:57.257 "nvmf_subsystem_get_controllers", 00:08:57.257 "nvmf_get_stats", 00:08:57.257 "nvmf_get_transports", 00:08:57.257 "nvmf_create_transport", 00:08:57.257 "nvmf_get_targets", 00:08:57.257 "nvmf_delete_target", 00:08:57.257 "nvmf_create_target", 00:08:57.257 "nvmf_subsystem_allow_any_host", 00:08:57.257 "nvmf_subsystem_set_keys", 00:08:57.257 "nvmf_subsystem_remove_host", 00:08:57.257 "nvmf_subsystem_add_host", 00:08:57.257 "nvmf_ns_remove_host", 00:08:57.257 "nvmf_ns_add_host", 00:08:57.257 "nvmf_subsystem_remove_ns", 00:08:57.257 "nvmf_subsystem_set_ns_ana_group", 00:08:57.257 "nvmf_subsystem_add_ns", 00:08:57.257 "nvmf_subsystem_listener_set_ana_state", 00:08:57.257 "nvmf_discovery_get_referrals", 00:08:57.257 "nvmf_discovery_remove_referral", 00:08:57.257 "nvmf_discovery_add_referral", 00:08:57.257 "nvmf_subsystem_remove_listener", 00:08:57.257 "nvmf_subsystem_add_listener", 00:08:57.257 "nvmf_delete_subsystem", 00:08:57.257 "nvmf_create_subsystem", 00:08:57.257 "nvmf_get_subsystems", 00:08:57.257 "env_dpdk_get_mem_stats", 00:08:57.257 "nbd_get_disks", 00:08:57.257 "nbd_stop_disk", 00:08:57.258 "nbd_start_disk", 00:08:57.258 "ublk_recover_disk", 00:08:57.258 "ublk_get_disks", 00:08:57.258 "ublk_stop_disk", 00:08:57.258 "ublk_start_disk", 00:08:57.258 "ublk_destroy_target", 00:08:57.258 "ublk_create_target", 00:08:57.258 "virtio_blk_create_transport", 00:08:57.258 "virtio_blk_get_transports", 00:08:57.258 "vhost_controller_set_coalescing", 00:08:57.258 "vhost_get_controllers", 00:08:57.258 "vhost_delete_controller", 00:08:57.258 "vhost_create_blk_controller", 00:08:57.258 "vhost_scsi_controller_remove_target", 00:08:57.258 "vhost_scsi_controller_add_target", 00:08:57.258 "vhost_start_scsi_controller", 00:08:57.258 "vhost_create_scsi_controller", 00:08:57.258 "thread_set_cpumask", 00:08:57.258 "scheduler_set_options", 00:08:57.258 "framework_get_governor", 00:08:57.258 "framework_get_scheduler", 00:08:57.258 "framework_set_scheduler", 00:08:57.258 "framework_get_reactors", 00:08:57.258 "thread_get_io_channels", 00:08:57.258 "thread_get_pollers", 00:08:57.258 "thread_get_stats", 00:08:57.258 "framework_monitor_context_switch", 00:08:57.258 "spdk_kill_instance", 00:08:57.258 "log_enable_timestamps", 00:08:57.258 "log_get_flags", 00:08:57.258 "log_clear_flag", 00:08:57.258 "log_set_flag", 00:08:57.258 "log_get_level", 00:08:57.258 "log_set_level", 00:08:57.258 "log_get_print_level", 00:08:57.258 "log_set_print_level", 00:08:57.258 "framework_enable_cpumask_locks", 00:08:57.258 "framework_disable_cpumask_locks", 00:08:57.258 "framework_wait_init", 00:08:57.258 "framework_start_init", 00:08:57.258 "scsi_get_devices", 00:08:57.258 "bdev_get_histogram", 00:08:57.258 "bdev_enable_histogram", 00:08:57.258 "bdev_set_qos_limit", 00:08:57.258 "bdev_set_qd_sampling_period", 00:08:57.258 "bdev_get_bdevs", 00:08:57.258 "bdev_reset_iostat", 00:08:57.258 "bdev_get_iostat", 00:08:57.258 "bdev_examine", 00:08:57.258 "bdev_wait_for_examine", 00:08:57.258 "bdev_set_options", 00:08:57.258 "accel_get_stats", 00:08:57.258 "accel_set_options", 00:08:57.258 "accel_set_driver", 00:08:57.258 "accel_crypto_key_destroy", 00:08:57.258 "accel_crypto_keys_get", 00:08:57.258 "accel_crypto_key_create", 00:08:57.258 "accel_assign_opc", 00:08:57.258 "accel_get_module_info", 00:08:57.258 "accel_get_opc_assignments", 00:08:57.258 "vmd_rescan", 00:08:57.258 "vmd_remove_device", 00:08:57.258 "vmd_enable", 00:08:57.258 "sock_get_default_impl", 00:08:57.258 "sock_set_default_impl", 
00:08:57.258 "sock_impl_set_options", 00:08:57.258 "sock_impl_get_options", 00:08:57.258 "iobuf_get_stats", 00:08:57.258 "iobuf_set_options", 00:08:57.258 "keyring_get_keys", 00:08:57.258 "vfu_tgt_set_base_path", 00:08:57.258 "framework_get_pci_devices", 00:08:57.258 "framework_get_config", 00:08:57.258 "framework_get_subsystems", 00:08:57.258 "fsdev_set_opts", 00:08:57.258 "fsdev_get_opts", 00:08:57.258 "trace_get_info", 00:08:57.258 "trace_get_tpoint_group_mask", 00:08:57.258 "trace_disable_tpoint_group", 00:08:57.258 "trace_enable_tpoint_group", 00:08:57.258 "trace_clear_tpoint_mask", 00:08:57.258 "trace_set_tpoint_mask", 00:08:57.258 "notify_get_notifications", 00:08:57.258 "notify_get_types", 00:08:57.258 "spdk_get_version", 00:08:57.258 "rpc_get_methods" 00:08:57.258 ] 00:08:57.258 06:20:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:57.258 06:20:16 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:57.258 06:20:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.258 06:20:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:57.258 06:20:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2477711 00:08:57.258 06:20:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 2477711 ']' 00:08:57.258 06:20:16 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 2477711 00:08:57.258 06:20:16 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:08:57.258 06:20:16 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:57.258 06:20:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2477711 00:08:57.258 06:20:17 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:57.258 06:20:17 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:57.258 06:20:17 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2477711' 00:08:57.258 killing process with pid 2477711 00:08:57.258 06:20:17 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 2477711 00:08:57.258 06:20:17 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 2477711 00:08:57.519 00:08:57.519 real 0m1.498s 00:08:57.519 user 0m2.692s 00:08:57.519 sys 0m0.461s 00:08:57.519 06:20:17 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:57.519 06:20:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.519 ************************************ 00:08:57.519 END TEST spdkcli_tcp 00:08:57.519 ************************************ 00:08:57.519 06:20:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:57.519 06:20:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:57.519 06:20:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:57.519 06:20:17 -- common/autotest_common.sh@10 -- # set +x 00:08:57.519 ************************************ 00:08:57.519 START TEST dpdk_mem_utility 00:08:57.519 ************************************ 00:08:57.519 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:57.519 * Looking for test storage... 
00:08:57.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:57.519 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:57.519 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:08:57.519 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:57.780 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.780 06:20:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:57.780 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.780 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:57.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.780 --rc genhtml_branch_coverage=1 00:08:57.780 --rc genhtml_function_coverage=1 00:08:57.780 --rc genhtml_legend=1 00:08:57.780 --rc geninfo_all_blocks=1 00:08:57.780 --rc geninfo_unexecuted_blocks=1 00:08:57.780 00:08:57.780 ' 00:08:57.780 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:57.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.781 --rc 
genhtml_branch_coverage=1 00:08:57.781 --rc genhtml_function_coverage=1 00:08:57.781 --rc genhtml_legend=1 00:08:57.781 --rc geninfo_all_blocks=1 00:08:57.781 --rc geninfo_unexecuted_blocks=1 00:08:57.781 00:08:57.781 ' 00:08:57.781 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:57.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.781 --rc genhtml_branch_coverage=1 00:08:57.781 --rc genhtml_function_coverage=1 00:08:57.781 --rc genhtml_legend=1 00:08:57.781 --rc geninfo_all_blocks=1 00:08:57.781 --rc geninfo_unexecuted_blocks=1 00:08:57.781 00:08:57.781 ' 00:08:57.781 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:57.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.781 --rc genhtml_branch_coverage=1 00:08:57.781 --rc genhtml_function_coverage=1 00:08:57.781 --rc genhtml_legend=1 00:08:57.781 --rc geninfo_all_blocks=1 00:08:57.781 --rc geninfo_unexecuted_blocks=1 00:08:57.781 00:08:57.781 ' 00:08:57.781 06:20:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:57.781 06:20:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2478044 00:08:57.781 06:20:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2478044 00:08:57.781 06:20:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:57.781 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 2478044 ']' 00:08:57.781 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.781 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:57.781 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.781 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:57.781 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:57.781 [2024-11-20 06:20:17.555101] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
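
The dpdk_mem_utility run starting here exercises a two-step flow: the env_dpdk_get_mem_stats RPC asks the running target to write a DPDK memory dump (the JSON reply below names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then parses that dump into the heap, mempool, and memzone summaries seen further down. Roughly, by hand, assuming a target is already running and the default dump path:

# Ask the target to dump DPDK memory stats (the reply names the dump file).
scripts/rpc.py env_dpdk_get_mem_stats

# Summarize heaps, mempools, and memzones from the dump.
scripts/dpdk_mem_info.py

# Print the per-element detail for heap 0, as in the -m 0 output below.
scripts/dpdk_mem_info.py -m 0
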
00:08:57.781 [2024-11-20 06:20:17.555164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478044 ] 00:08:57.781 [2024-11-20 06:20:17.608687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.781 [2024-11-20 06:20:17.641095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.042 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:58.042 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:08:58.042 06:20:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:58.042 06:20:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:58.042 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.042 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:58.042 { 00:08:58.042 "filename": "/tmp/spdk_mem_dump.txt" 00:08:58.042 } 00:08:58.042 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.042 06:20:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:58.042 DPDK memory size 818.000000 MiB in 1 heap(s) 00:08:58.042 1 heaps totaling size 818.000000 MiB 00:08:58.042 size: 818.000000 MiB heap id: 0 00:08:58.042 end heaps---------- 00:08:58.042 9 mempools totaling size 603.782043 MiB 00:08:58.042 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:58.042 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:58.042 size: 100.555481 MiB name: bdev_io_2478044 00:08:58.042 size: 50.003479 MiB name: msgpool_2478044 00:08:58.042 size: 36.509338 MiB name: fsdev_io_2478044 00:08:58.042 size: 21.763794 MiB name: PDU_Pool 00:08:58.042 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:58.042 size: 4.133484 MiB name: evtpool_2478044 00:08:58.042 size: 0.026123 MiB name: Session_Pool 00:08:58.042 end mempools------- 00:08:58.042 6 memzones totaling size 4.142822 MiB 00:08:58.042 size: 1.000366 MiB name: RG_ring_0_2478044 00:08:58.042 size: 1.000366 MiB name: RG_ring_1_2478044 00:08:58.042 size: 1.000366 MiB name: RG_ring_4_2478044 00:08:58.042 size: 1.000366 MiB name: RG_ring_5_2478044 00:08:58.042 size: 0.125366 MiB name: RG_ring_2_2478044 00:08:58.042 size: 0.015991 MiB name: RG_ring_3_2478044 00:08:58.042 end memzones------- 00:08:58.042 06:20:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:58.042 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:08:58.042 list of free elements. 
size: 10.852478 MiB 00:08:58.042 element at address: 0x200019200000 with size: 0.999878 MiB 00:08:58.042 element at address: 0x200019400000 with size: 0.999878 MiB 00:08:58.042 element at address: 0x200000400000 with size: 0.998535 MiB 00:08:58.042 element at address: 0x200032000000 with size: 0.994446 MiB 00:08:58.042 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:58.042 element at address: 0x200012c00000 with size: 0.944275 MiB 00:08:58.042 element at address: 0x200019600000 with size: 0.936584 MiB 00:08:58.042 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:58.042 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:08:58.042 element at address: 0x200000c00000 with size: 0.495422 MiB 00:08:58.042 element at address: 0x20000a600000 with size: 0.490723 MiB 00:08:58.042 element at address: 0x200019800000 with size: 0.485657 MiB 00:08:58.042 element at address: 0x200003e00000 with size: 0.481934 MiB 00:08:58.042 element at address: 0x200028200000 with size: 0.410034 MiB 00:08:58.042 element at address: 0x200000800000 with size: 0.355042 MiB 00:08:58.042 list of standard malloc elements. size: 199.218628 MiB 00:08:58.042 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:58.042 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:58.042 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:58.042 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:08:58.042 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:08:58.043 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:58.043 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:08:58.043 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:58.043 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:08:58.043 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20000085b040 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20000085f300 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:58.043 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:58.043 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:58.043 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:58.043 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:58.043 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:58.043 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:08:58.043 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:08:58.043 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:08:58.043 element at address: 0x200028268f80 with size: 0.000183 MiB 00:08:58.043 element at address: 0x200028269040 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:08:58.043 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:08:58.043 list of memzone associated elements. size: 607.928894 MiB 00:08:58.043 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:08:58.043 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:58.043 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:08:58.043 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:58.043 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:08:58.043 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2478044_0 00:08:58.043 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:58.043 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2478044_0 00:08:58.043 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:58.043 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2478044_0 00:08:58.043 element at address: 0x2000199be940 with size: 20.255554 MiB 00:08:58.043 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:58.043 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:08:58.043 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:58.043 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:58.043 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2478044_0 00:08:58.043 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:58.043 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2478044 00:08:58.043 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:58.043 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2478044 00:08:58.043 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:58.043 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:58.043 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:08:58.043 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:58.043 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:58.043 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:58.043 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:58.043 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:58.043 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:58.043 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2478044 00:08:58.043 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:58.043 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2478044 00:08:58.043 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:08:58.043 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2478044 00:08:58.043 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:08:58.043 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2478044 00:08:58.043 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:58.043 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2478044 00:08:58.043 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:58.043 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2478044 00:08:58.043 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:58.043 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:58.043 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:58.043 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:58.043 element at address: 0x20001987c540 with size: 0.250488 MiB 00:08:58.043 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:58.043 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:58.043 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2478044 00:08:58.043 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:08:58.043 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2478044 00:08:58.043 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:08:58.043 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:58.043 element at address: 0x200028269100 with size: 0.023743 MiB 00:08:58.043 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:58.043 element at address: 0x20000085b100 with size: 0.016113 MiB 00:08:58.043 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2478044 00:08:58.043 element at address: 0x20002826f240 with size: 0.002441 MiB 00:08:58.043 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:58.043 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:58.043 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2478044 00:08:58.043 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:58.043 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2478044 00:08:58.043 element at address: 0x20000085af00 with size: 0.000305 MiB 00:08:58.043 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2478044 00:08:58.043 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:08:58.044 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:58.044 06:20:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:58.044 06:20:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2478044 00:08:58.044 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 2478044 ']' 00:08:58.044 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 2478044 00:08:58.044 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:08:58.044 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:58.044 06:20:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2478044 00:08:58.304 06:20:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:58.304 06:20:18 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:58.304 06:20:18 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2478044' 00:08:58.304 killing process with pid 2478044 00:08:58.304 06:20:18 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 2478044 00:08:58.304 06:20:18 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 2478044 00:08:58.304 00:08:58.304 real 0m0.892s 00:08:58.304 user 0m0.857s 00:08:58.304 sys 0m0.380s 00:08:58.304 06:20:18 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:58.304 06:20:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:58.304 ************************************ 00:08:58.304 END TEST dpdk_mem_utility 00:08:58.304 ************************************ 00:08:58.565 06:20:18 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:58.565 06:20:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:58.565 06:20:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.565 06:20:18 -- common/autotest_common.sh@10 -- # set +x 00:08:58.565 ************************************ 00:08:58.565 START TEST event 00:08:58.565 ************************************ 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:58.565 * Looking for test storage... 00:08:58.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1691 -- # lcov --version 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:58.565 06:20:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.565 06:20:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.565 06:20:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.565 06:20:18 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.565 06:20:18 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.565 06:20:18 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.565 06:20:18 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.565 06:20:18 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.565 06:20:18 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.565 06:20:18 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.565 06:20:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.565 06:20:18 event -- scripts/common.sh@344 -- # case "$op" in 00:08:58.565 06:20:18 event -- scripts/common.sh@345 -- # : 1 00:08:58.565 06:20:18 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.565 06:20:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.565 06:20:18 event -- scripts/common.sh@365 -- # decimal 1 00:08:58.565 06:20:18 event -- scripts/common.sh@353 -- # local d=1 00:08:58.565 06:20:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.565 06:20:18 event -- scripts/common.sh@355 -- # echo 1 00:08:58.565 06:20:18 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.565 06:20:18 event -- scripts/common.sh@366 -- # decimal 2 00:08:58.565 06:20:18 event -- scripts/common.sh@353 -- # local d=2 00:08:58.565 06:20:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.565 06:20:18 event -- scripts/common.sh@355 -- # echo 2 00:08:58.565 06:20:18 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.565 06:20:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.565 06:20:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.565 06:20:18 event -- scripts/common.sh@368 -- # return 0 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:58.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.565 --rc genhtml_branch_coverage=1 00:08:58.565 --rc genhtml_function_coverage=1 00:08:58.565 --rc genhtml_legend=1 00:08:58.565 --rc geninfo_all_blocks=1 00:08:58.565 --rc geninfo_unexecuted_blocks=1 00:08:58.565 00:08:58.565 ' 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:58.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.565 --rc genhtml_branch_coverage=1 00:08:58.565 --rc genhtml_function_coverage=1 00:08:58.565 --rc genhtml_legend=1 00:08:58.565 --rc geninfo_all_blocks=1 00:08:58.565 --rc geninfo_unexecuted_blocks=1 00:08:58.565 00:08:58.565 ' 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:58.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.565 --rc genhtml_branch_coverage=1 00:08:58.565 --rc genhtml_function_coverage=1 00:08:58.565 --rc genhtml_legend=1 00:08:58.565 --rc geninfo_all_blocks=1 00:08:58.565 --rc geninfo_unexecuted_blocks=1 00:08:58.565 00:08:58.565 ' 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:58.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.565 --rc genhtml_branch_coverage=1 00:08:58.565 --rc genhtml_function_coverage=1 00:08:58.565 --rc genhtml_legend=1 00:08:58.565 --rc geninfo_all_blocks=1 00:08:58.565 --rc geninfo_unexecuted_blocks=1 00:08:58.565 00:08:58.565 ' 00:08:58.565 06:20:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:58.565 06:20:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:58.565 06:20:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:58.565 06:20:18 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.565 06:20:18 event -- common/autotest_common.sh@10 -- # set +x 00:08:58.826 ************************************ 00:08:58.826 START TEST event_perf 00:08:58.826 ************************************ 00:08:58.826 06:20:18 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:08:58.826 Running I/O for 1 seconds...[2024-11-20 06:20:18.526468] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:08:58.826 [2024-11-20 06:20:18.526581] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478370 ] 00:08:58.826 [2024-11-20 06:20:18.618311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.826 [2024-11-20 06:20:18.661526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.826 [2024-11-20 06:20:18.661680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.826 [2024-11-20 06:20:18.661834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.826 Running I/O for 1 seconds...[2024-11-20 06:20:18.661834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.775 00:08:59.775 lcore 0: 181849 00:08:59.775 lcore 1: 181852 00:08:59.775 lcore 2: 181854 00:08:59.775 lcore 3: 181852 00:08:59.775 done. 00:08:59.775 00:08:59.775 real 0m1.184s 00:08:59.775 user 0m4.092s 00:08:59.775 sys 0m0.087s 00:08:59.775 06:20:19 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.775 06:20:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:59.775 ************************************ 00:08:59.775 END TEST event_perf 00:08:59.775 ************************************ 00:09:00.036 06:20:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:00.036 06:20:19 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:00.036 06:20:19 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.036 06:20:19 event -- common/autotest_common.sh@10 -- # set +x 00:09:00.036 ************************************ 00:09:00.036 START TEST event_reactor 00:09:00.036 ************************************ 00:09:00.036 06:20:19 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:00.036 [2024-11-20 06:20:19.782212] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:09:00.036 [2024-11-20 06:20:19.782315] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478721 ] 00:09:00.036 [2024-11-20 06:20:19.868036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.036 [2024-11-20 06:20:19.899691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.421 test_start 00:09:01.421 oneshot 00:09:01.421 tick 100 00:09:01.421 tick 100 00:09:01.421 tick 250 00:09:01.421 tick 100 00:09:01.421 tick 100 00:09:01.421 tick 100 00:09:01.421 tick 250 00:09:01.421 tick 500 00:09:01.421 tick 100 00:09:01.421 tick 100 00:09:01.421 tick 250 00:09:01.421 tick 100 00:09:01.421 tick 100 00:09:01.421 test_end 00:09:01.421 00:09:01.421 real 0m1.165s 00:09:01.421 user 0m1.087s 00:09:01.421 sys 0m0.074s 00:09:01.421 06:20:20 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.421 06:20:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:01.421 ************************************ 00:09:01.421 END TEST event_reactor 00:09:01.421 ************************************ 00:09:01.421 06:20:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:01.421 06:20:20 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:01.421 06:20:20 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.421 06:20:20 event -- common/autotest_common.sh@10 -- # set +x 00:09:01.421 ************************************ 00:09:01.421 START TEST event_reactor_perf 00:09:01.421 ************************************ 00:09:01.421 06:20:21 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:01.421 [2024-11-20 06:20:21.026883] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:09:01.422 [2024-11-20 06:20:21.026981] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2478871 ] 00:09:01.422 [2024-11-20 06:20:21.118297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.422 [2024-11-20 06:20:21.155803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.364 test_start 00:09:02.364 test_end 00:09:02.364 Performance: 538826 events per second 00:09:02.364 00:09:02.364 real 0m1.176s 00:09:02.364 user 0m1.087s 00:09:02.364 sys 0m0.086s 00:09:02.364 06:20:22 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:02.364 06:20:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:02.364 ************************************ 00:09:02.364 END TEST event_reactor_perf 00:09:02.364 ************************************ 00:09:02.364 06:20:22 event -- event/event.sh@49 -- # uname -s 00:09:02.364 06:20:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:02.364 06:20:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:02.364 06:20:22 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:02.364 06:20:22 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:02.364 06:20:22 event -- common/autotest_common.sh@10 -- # set +x 00:09:02.364 ************************************ 00:09:02.364 START TEST event_scheduler 00:09:02.364 ************************************ 00:09:02.364 06:20:22 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:02.625 * Looking for test storage... 
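
Both event microbenchmarks above are plain binaries taking a core mask and a run time: event_perf fires events across all reactors and prints one per-lcore counter, while reactor_perf measures raw event throughput on a single reactor and prints the "Performance: N events per second" line seen above. A sketch of invoking them directly, with paths relative to an spdk checkout as in this workspace:

# 4 cores, 1 second: prints one "lcore N: <events>" counter per reactor.
test/event/event_perf/event_perf -m 0xF -t 1

# Single reactor for 1 second; keep just the throughput line.
test/event/reactor_perf/reactor_perf -t 1 | grep 'events per second'
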
00:09:02.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.625 06:20:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.625 --rc genhtml_branch_coverage=1 00:09:02.625 --rc genhtml_function_coverage=1 00:09:02.625 --rc genhtml_legend=1 00:09:02.625 --rc geninfo_all_blocks=1 00:09:02.625 --rc geninfo_unexecuted_blocks=1 00:09:02.625 00:09:02.625 ' 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.625 --rc genhtml_branch_coverage=1 00:09:02.625 --rc genhtml_function_coverage=1 00:09:02.625 --rc genhtml_legend=1 00:09:02.625 --rc geninfo_all_blocks=1 00:09:02.625 --rc geninfo_unexecuted_blocks=1 00:09:02.625 00:09:02.625 ' 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.625 --rc genhtml_branch_coverage=1 00:09:02.625 --rc genhtml_function_coverage=1 00:09:02.625 --rc genhtml_legend=1 00:09:02.625 --rc geninfo_all_blocks=1 00:09:02.625 --rc geninfo_unexecuted_blocks=1 00:09:02.625 00:09:02.625 ' 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.625 --rc genhtml_branch_coverage=1 00:09:02.625 --rc genhtml_function_coverage=1 00:09:02.625 --rc genhtml_legend=1 00:09:02.625 --rc geninfo_all_blocks=1 00:09:02.625 --rc geninfo_unexecuted_blocks=1 00:09:02.625 00:09:02.625 ' 00:09:02.625 06:20:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:02.625 06:20:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2479157 00:09:02.625 06:20:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:02.625 06:20:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2479157 00:09:02.625 06:20:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 2479157 ']' 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:02.625 06:20:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:02.625 [2024-11-20 06:20:22.517759] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:09:02.625 [2024-11-20 06:20:22.517834] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479157 ] 00:09:02.886 [2024-11-20 06:20:22.611024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.886 [2024-11-20 06:20:22.667206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.886 [2024-11-20 06:20:22.667369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.886 [2024-11-20 06:20:22.667526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.886 [2024-11-20 06:20:22.667527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.458 06:20:23 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:03.458 06:20:23 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:09:03.458 06:20:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:03.458 06:20:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.458 06:20:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:03.458 [2024-11-20 06:20:23.341902] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:03.458 [2024-11-20 06:20:23.341921] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:03.458 [2024-11-20 06:20:23.341931] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:03.458 [2024-11-20 06:20:23.341937] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:03.458 [2024-11-20 06:20:23.341943] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:03.458 06:20:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.458 06:20:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:03.458 06:20:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.458 06:20:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:03.719 [2024-11-20 06:20:23.409846] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
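
Because the scheduler app is launched with --wait-for-rpc (visible in the command line above), framework initialization pauses until an RPC client selects a scheduler; the test then calls framework_set_scheduler dynamic, rides through the dpdk_governor SMT-siblings notice, and releases init with framework_start_init. The same handshake by hand, roughly (all three methods appear in the rpc_get_methods listing earlier in this log):

# A target launched with --wait-for-rpc is parked before subsystem init.
scripts/rpc.py framework_set_scheduler dynamic   # select the dynamic scheduler
scripts/rpc.py framework_start_init              # then let initialization proceed
scripts/rpc.py framework_get_scheduler           # confirm which scheduler is active
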
00:09:03.719 06:20:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.719 06:20:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:03.719 06:20:23 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.719 06:20:23 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.719 06:20:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:03.719 ************************************ 00:09:03.719 START TEST scheduler_create_thread 00:09:03.719 ************************************ 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:03.719 2 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:03.719 3 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:03.719 4 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:03.719 5 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:03.719 6 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:03.719 7 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:03.719 8 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:03.719 9 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.719 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:04.291 10 00:09:04.291 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.291 06:20:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:04.291 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.291 06:20:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:05.676 06:20:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.676 06:20:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:05.676 06:20:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:05.676 06:20:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.676 06:20:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:06.246 06:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.246 06:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:06.246 06:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.246 06:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:07.186 06:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.186 06:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:07.186 06:20:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:07.186 06:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.186 06:20:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:07.757 06:20:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.757 00:09:07.757 real 0m4.225s 00:09:07.757 user 0m0.027s 00:09:07.757 sys 0m0.005s 00:09:07.757 06:20:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:07.757 06:20:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:07.757 ************************************ 00:09:07.757 END TEST scheduler_create_thread 00:09:07.757 ************************************ 00:09:08.018 06:20:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:08.018 06:20:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2479157 00:09:08.018 06:20:27 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 2479157 ']' 00:09:08.018 06:20:27 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 2479157 00:09:08.018 06:20:27 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:09:08.018 06:20:27 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:08.018 06:20:27 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2479157 00:09:08.018 06:20:27 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:09:08.018 06:20:27 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:09:08.018 06:20:27 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2479157' 00:09:08.018 killing process with pid 2479157 00:09:08.018 06:20:27 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 2479157 00:09:08.018 06:20:27 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 2479157 00:09:08.278 [2024-11-20 06:20:28.051686] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
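
The scheduler_create_thread subtest above manages threads entirely through an rpc.py plugin (scheduler_plugin, loaded with --plugin in the xtrace): scheduler_thread_create takes a name, an optional cpumask, and an active percentage and prints the new thread id, which scheduler_thread_set_active and scheduler_thread_delete then consume. A condensed replay of the calls seen above; making the plugin importable via PYTHONPATH is an assumption about the local setup:

# Assumed: the plugin module lives under test/event/scheduler.
export PYTHONPATH=$PYTHONPATH:test/event/scheduler
rpc="scripts/rpc.py --plugin scheduler_plugin"

# Pinned thread on core 0 at 100% active, as in the subtest.
id=$($rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)
$rpc scheduler_thread_set_active "$id" 50   # rebalance it to 50% active
$rpc scheduler_thread_delete "$id"          # then tear it down
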
00:09:08.540 00:09:08.540 real 0m5.950s 00:09:08.540 user 0m13.867s 00:09:08.540 sys 0m0.429s 00:09:08.540 06:20:28 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.540 06:20:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:08.540 ************************************ 00:09:08.540 END TEST event_scheduler 00:09:08.540 ************************************ 00:09:08.540 06:20:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:08.540 06:20:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:08.540 06:20:28 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:08.540 06:20:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:08.540 06:20:28 event -- common/autotest_common.sh@10 -- # set +x 00:09:08.540 ************************************ 00:09:08.540 START TEST app_repeat 00:09:08.540 ************************************ 00:09:08.540 06:20:28 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2480529 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2480529' 00:09:08.540 Process app_repeat pid: 2480529 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:08.540 spdk_app_start Round 0 00:09:08.540 06:20:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2480529 /var/tmp/spdk-nbd.sock 00:09:08.540 06:20:28 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2480529 ']' 00:09:08.540 06:20:28 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:08.540 06:20:28 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:08.540 06:20:28 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:08.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:08.540 06:20:28 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:08.540 06:20:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:08.540 [2024-11-20 06:20:28.337853] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:09:08.540 [2024-11-20 06:20:28.337931] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2480529 ] 00:09:08.540 [2024-11-20 06:20:28.436180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:08.801 [2024-11-20 06:20:28.470436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.801 [2024-11-20 06:20:28.470437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.370 06:20:29 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:09.370 06:20:29 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:09.370 06:20:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:09.631 Malloc0 00:09:09.631 06:20:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:09.631 Malloc1 00:09:09.631 06:20:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:09.631 06:20:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:09.891 /dev/nbd0 00:09:09.891 06:20:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:09.891 06:20:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
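app_repeat_test, whose trace follows, is a thin wrapper around the app_repeat binary. Reassembled from the event.sh lines above, it boils down to the sketch below; the helper names, socket path, core mask, and three rounds come from the log, and paths are shortened:

rpc_server=/var/tmp/spdk-nbd.sock
test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"        # block until the app answers RPCs
    # create Malloc0/Malloc1, export them as /dev/nbd0 and /dev/nbd1,
    # then write and verify data (the nbd_common.sh trace that follows)
    scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM   # app restarts its event loop
    sleep 3
done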
/proc/partitions 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:09.891 1+0 records in 00:09:09.891 1+0 records out 00:09:09.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027179 s, 15.1 MB/s 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:09.891 06:20:29 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:09.891 06:20:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:09.891 06:20:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:09.891 06:20:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:10.218 /dev/nbd1 00:09:10.218 06:20:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:10.218 06:20:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:10.218 1+0 records in 00:09:10.218 1+0 records out 00:09:10.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279546 s, 14.7 MB/s 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:10.218 06:20:29 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:10.218 06:20:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:10.218 06:20:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:10.218 06:20:29 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:10.218 06:20:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.218 06:20:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:10.479 { 00:09:10.479 "nbd_device": "/dev/nbd0", 00:09:10.479 "bdev_name": "Malloc0" 00:09:10.479 }, 00:09:10.479 { 00:09:10.479 "nbd_device": "/dev/nbd1", 00:09:10.479 "bdev_name": "Malloc1" 00:09:10.479 } 00:09:10.479 ]' 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:10.479 { 00:09:10.479 "nbd_device": "/dev/nbd0", 00:09:10.479 "bdev_name": "Malloc0" 00:09:10.479 }, 00:09:10.479 { 00:09:10.479 "nbd_device": "/dev/nbd1", 00:09:10.479 "bdev_name": "Malloc1" 00:09:10.479 } 00:09:10.479 ]' 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:10.479 /dev/nbd1' 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:10.479 /dev/nbd1' 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:10.479 256+0 records in 00:09:10.479 256+0 records out 00:09:10.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127305 s, 82.4 MB/s 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:10.479 256+0 records in 00:09:10.479 256+0 records out 00:09:10.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120003 s, 87.4 MB/s 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:10.479 256+0 records in 00:09:10.479 256+0 records out 00:09:10.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123213 s, 85.1 MB/s 00:09:10.479 06:20:30 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:10.479 06:20:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.480 06:20:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:10.741 06:20:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:10.741 06:20:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:10.741 06:20:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:10.741 06:20:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.741 06:20:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.741 06:20:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:10.741 06:20:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:10.741 06:20:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.741 06:20:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.741 06:20:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:11.001 06:20:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:11.001 06:20:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:11.263 06:20:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:11.263 [2024-11-20 06:20:31.173518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:11.523 [2024-11-20 06:20:31.203777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.523 [2024-11-20 06:20:31.203806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.523 [2024-11-20 06:20:31.233110] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:11.524 [2024-11-20 06:20:31.233141] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:14.896 06:20:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:14.896 06:20:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:14.896 spdk_app_start Round 1 00:09:14.896 06:20:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2480529 /var/tmp/spdk-nbd.sock 00:09:14.896 06:20:34 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2480529 ']' 00:09:14.896 06:20:34 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:14.896 06:20:34 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:14.896 06:20:34 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:14.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
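The data pass inside each round is plain dd plus cmp. Reconstructed from the nbd_common.sh trace above; apart from $SPDK_DIR standing in for the long workspace path, this is what the log shows:

tmp_file=$SPDK_DIR/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
for dev in "${nbd_list[@]}"; do                                # write pass
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do                                # verify pass
    cmp -b -n 1M "$tmp_file" "$dev"                            # any mismatch fails the test
done
rm "$tmp_file"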
00:09:14.896 06:20:34 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:14.896 06:20:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:14.896 06:20:34 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:14.896 06:20:34 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:14.896 06:20:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:14.896 Malloc0 00:09:14.896 06:20:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:14.896 Malloc1 00:09:14.896 06:20:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:14.896 06:20:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:15.191 /dev/nbd0 00:09:15.191 06:20:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:15.191 06:20:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:15.191 1+0 records in 00:09:15.191 1+0 records out 00:09:15.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234143 s, 17.5 MB/s 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:15.191 06:20:34 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:15.191 06:20:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.191 06:20:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.191 06:20:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:15.191 /dev/nbd1 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:15.452 1+0 records in 00:09:15.452 1+0 records out 00:09:15.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276603 s, 14.8 MB/s 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:15.452 06:20:35 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:15.452 { 00:09:15.452 "nbd_device": "/dev/nbd0", 00:09:15.452 "bdev_name": "Malloc0" 00:09:15.452 }, 00:09:15.452 { 00:09:15.452 "nbd_device": "/dev/nbd1", 00:09:15.452 "bdev_name": "Malloc1" 00:09:15.452 } 00:09:15.452 ]' 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:15.452 { 00:09:15.452 "nbd_device": "/dev/nbd0", 00:09:15.452 "bdev_name": "Malloc0" 00:09:15.452 }, 00:09:15.452 { 00:09:15.452 "nbd_device": "/dev/nbd1", 00:09:15.452 "bdev_name": "Malloc1" 00:09:15.452 } 00:09:15.452 ]' 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:15.452 /dev/nbd1' 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:15.452 /dev/nbd1' 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:15.452 06:20:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:15.712 256+0 records in 00:09:15.712 256+0 records out 00:09:15.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127629 s, 82.2 MB/s 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:15.712 256+0 records in 00:09:15.712 256+0 records out 00:09:15.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118276 s, 88.7 MB/s 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:15.712 256+0 records in 00:09:15.712 256+0 records out 00:09:15.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128268 s, 81.7 MB/s 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.712 06:20:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.713 06:20:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:15.713 06:20:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:15.713 06:20:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.713 06:20:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.713 06:20:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:15.974 06:20:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:15.974 06:20:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:15.974 06:20:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:15.974 06:20:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.974 06:20:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.974 06:20:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:15.974 06:20:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:15.974 06:20:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.974 06:20:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.974 06:20:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.974 06:20:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:16.234 06:20:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:16.235 06:20:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:16.235 06:20:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:16.235 06:20:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:16.235 06:20:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:16.235 06:20:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:16.235 06:20:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:16.235 06:20:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:16.235 06:20:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:16.235 06:20:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:16.235 06:20:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:16.235 06:20:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:16.235 06:20:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:16.496 06:20:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:16.496 [2024-11-20 06:20:36.319175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:16.496 [2024-11-20 06:20:36.349867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.496 [2024-11-20 06:20:36.350011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.496 [2024-11-20 06:20:36.379911] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:16.496 [2024-11-20 06:20:36.379942] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:19.795 06:20:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:19.795 06:20:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:19.795 spdk_app_start Round 2 00:09:19.795 06:20:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2480529 /var/tmp/spdk-nbd.sock 00:09:19.795 06:20:39 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2480529 ']' 00:09:19.796 06:20:39 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:19.796 06:20:39 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:19.796 06:20:39 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:19.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
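Before any of that I/O, waitfornbd gates on the device both existing and being readable. Pieced together from the autotest_common.sh trace; the 20-try bound, the /proc/partitions grep, and the one-block direct read are from the log, while the polling interval and the scratch-file path are assumptions:

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                      # assumed delay; the trace does not show it
    done
    # a device listed in /proc/partitions can still be unreadable; prove it with one read
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]                   # the trace reads back a full 4096 bytes
}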
00:09:19.796 06:20:39 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:19.796 06:20:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:19.796 06:20:39 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:19.796 06:20:39 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:19.796 06:20:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:19.796 Malloc0 00:09:19.796 06:20:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:20.057 Malloc1 00:09:20.057 06:20:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.057 06:20:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:20.325 /dev/nbd0 00:09:20.325 06:20:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:20.325 06:20:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:20.326 1+0 records in 00:09:20.326 1+0 records out 00:09:20.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027608 s, 14.8 MB/s 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:20.326 06:20:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.326 06:20:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.326 06:20:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:20.326 /dev/nbd1 00:09:20.326 06:20:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:20.326 06:20:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:20.326 06:20:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:20.587 06:20:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:20.587 06:20:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:20.587 06:20:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:20.587 06:20:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:20.587 1+0 records in 00:09:20.587 1+0 records out 00:09:20.587 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282079 s, 14.5 MB/s 00:09:20.587 06:20:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.587 06:20:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:20.587 06:20:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.587 06:20:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:20.587 06:20:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:20.587 { 00:09:20.587 "nbd_device": "/dev/nbd0", 00:09:20.587 "bdev_name": "Malloc0" 00:09:20.587 }, 00:09:20.587 { 00:09:20.587 "nbd_device": "/dev/nbd1", 00:09:20.587 "bdev_name": "Malloc1" 00:09:20.587 } 00:09:20.587 ]' 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:20.587 { 00:09:20.587 "nbd_device": "/dev/nbd0", 00:09:20.587 "bdev_name": "Malloc0" 00:09:20.587 }, 00:09:20.587 { 00:09:20.587 "nbd_device": "/dev/nbd1", 00:09:20.587 "bdev_name": "Malloc1" 00:09:20.587 } 00:09:20.587 ]' 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:20.587 /dev/nbd1' 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:20.587 /dev/nbd1' 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:20.587 06:20:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:20.849 256+0 records in 00:09:20.849 256+0 records out 00:09:20.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127431 s, 82.3 MB/s 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:20.849 256+0 records in 00:09:20.849 256+0 records out 00:09:20.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120739 s, 86.8 MB/s 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:20.849 256+0 records in 00:09:20.849 256+0 records out 00:09:20.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130153 s, 80.6 MB/s 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.849 06:20:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:21.110 06:20:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:21.110 06:20:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:21.110 06:20:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:21.110 06:20:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.110 06:20:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.110 06:20:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:21.110 06:20:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:21.110 06:20:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.110 06:20:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:21.110 06:20:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.110 06:20:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:21.371 06:20:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:21.371 06:20:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:21.633 06:20:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:21.633 [2024-11-20 06:20:41.450302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.633 [2024-11-20 06:20:41.480751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.633 [2024-11-20 06:20:41.480763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.633 [2024-11-20 06:20:41.510152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:21.633 [2024-11-20 06:20:41.510186] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:24.936 06:20:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2480529 /var/tmp/spdk-nbd.sock 00:09:24.936 06:20:44 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2480529 ']' 00:09:24.936 06:20:44 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:24.936 06:20:44 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:24.936 06:20:44 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:24.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
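Teardown runs through killprocess, traced right after this round. A sketch of the autotest_common.sh checks in the order the log shows them; reactor_0 is simply what ps reports as the comm name of the SPDK primary core:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1          # no pid captured
    kill -0 "$pid" || return 0         # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" = sudo ]; then
            :                          # the real helper special-cases sudo wrappers (elided)
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                        # reap it so the next test cannot race the socket
}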
00:09:24.936 06:20:44 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:24.936 06:20:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:24.937 06:20:44 event.app_repeat -- event/event.sh@39 -- # killprocess 2480529 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 2480529 ']' 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 2480529 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2480529 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2480529' 00:09:24.937 killing process with pid 2480529 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@971 -- # kill 2480529 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@976 -- # wait 2480529 00:09:24.937 spdk_app_start is called in Round 0. 00:09:24.937 Shutdown signal received, stop current app iteration 00:09:24.937 Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 reinitialization... 00:09:24.937 spdk_app_start is called in Round 1. 00:09:24.937 Shutdown signal received, stop current app iteration 00:09:24.937 Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 reinitialization... 00:09:24.937 spdk_app_start is called in Round 2. 00:09:24.937 Shutdown signal received, stop current app iteration 00:09:24.937 Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 reinitialization... 00:09:24.937 spdk_app_start is called in Round 3. 
00:09:24.937 Shutdown signal received, stop current app iteration 00:09:24.937 06:20:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:24.937 06:20:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:24.937 00:09:24.937 real 0m16.409s 00:09:24.937 user 0m35.958s 00:09:24.937 sys 0m2.353s 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:24.937 06:20:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:24.937 ************************************ 00:09:24.937 END TEST app_repeat 00:09:24.937 ************************************ 00:09:24.937 06:20:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:24.937 06:20:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:24.937 06:20:44 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:24.937 06:20:44 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:24.937 06:20:44 event -- common/autotest_common.sh@10 -- # set +x 00:09:24.937 ************************************ 00:09:24.937 START TEST cpu_locks 00:09:24.937 ************************************ 00:09:24.937 06:20:44 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:25.199 * Looking for test storage... 00:09:25.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.199 06:20:44 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:25.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.199 --rc genhtml_branch_coverage=1 00:09:25.199 --rc genhtml_function_coverage=1 00:09:25.199 --rc genhtml_legend=1 00:09:25.199 --rc geninfo_all_blocks=1 00:09:25.199 --rc geninfo_unexecuted_blocks=1 00:09:25.199 00:09:25.199 ' 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:25.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.199 --rc genhtml_branch_coverage=1 00:09:25.199 --rc genhtml_function_coverage=1 00:09:25.199 --rc genhtml_legend=1 00:09:25.199 --rc geninfo_all_blocks=1 00:09:25.199 --rc geninfo_unexecuted_blocks=1 00:09:25.199 00:09:25.199 ' 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:25.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.199 --rc genhtml_branch_coverage=1 00:09:25.199 --rc genhtml_function_coverage=1 00:09:25.199 --rc genhtml_legend=1 00:09:25.199 --rc geninfo_all_blocks=1 00:09:25.199 --rc geninfo_unexecuted_blocks=1 00:09:25.199 00:09:25.199 ' 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:25.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.199 --rc genhtml_branch_coverage=1 00:09:25.199 --rc genhtml_function_coverage=1 00:09:25.199 --rc genhtml_legend=1 00:09:25.199 --rc geninfo_all_blocks=1 00:09:25.199 --rc geninfo_unexecuted_blocks=1 00:09:25.199 00:09:25.199 ' 00:09:25.199 06:20:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:25.199 06:20:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:25.199 06:20:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:25.199 06:20:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:25.199 06:20:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:25.199 ************************************ 
00:09:25.199 START TEST default_locks 00:09:25.199 ************************************ 00:09:25.199 06:20:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:09:25.199 06:20:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2484054 00:09:25.199 06:20:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2484054 00:09:25.199 06:20:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:25.199 06:20:45 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2484054 ']' 00:09:25.199 06:20:45 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.199 06:20:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:25.199 06:20:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.199 06:20:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:25.199 06:20:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:25.199 [2024-11-20 06:20:45.074720] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:09:25.199 [2024-11-20 06:20:45.074792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484054 ] 00:09:25.460 [2024-11-20 06:20:45.162932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.460 [2024-11-20 06:20:45.203024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.032 06:20:45 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:26.032 06:20:45 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:09:26.032 06:20:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2484054 00:09:26.032 06:20:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2484054 00:09:26.032 06:20:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:26.604 lslocks: write error 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2484054 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 2484054 ']' 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 2484054 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2484054 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 2484054' 00:09:26.604 killing process with pid 2484054 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 2484054 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 2484054 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2484054 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2484054 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2484054 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2484054 ']' 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
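The NOT wrapper invoked above is the harness's negative assertion: waitforlisten is re-run against the pid that was just killed, and NOT only passes if that call fails, as the es=1 bookkeeping on the lines that follow shows. A simplified stand-in with the same contract (the real helper in autotest_common.sh also validates its argument, as the valid_exec_arg and type -t checks above indicate):

  NOT() {
    if "$@"; then
      return 1    # wrapped command unexpectedly succeeded
    fi
    return 0      # expected failure observed
  }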
00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:26.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2484054) - No such process 00:09:26.604 ERROR: process (pid: 2484054) is no longer running 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:26.604 06:20:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:26.605 06:20:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:26.605 00:09:26.605 real 0m1.505s 00:09:26.605 user 0m1.601s 00:09:26.605 sys 0m0.554s 00:09:26.605 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.605 06:20:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:26.605 ************************************ 00:09:26.605 END TEST default_locks 00:09:26.605 ************************************ 00:09:26.866 06:20:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:26.866 06:20:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:26.866 06:20:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.866 06:20:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:26.866 ************************************ 00:09:26.866 START TEST default_locks_via_rpc 00:09:26.866 ************************************ 00:09:26.866 06:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:09:26.866 06:20:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2484353 00:09:26.866 06:20:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2484353 00:09:26.866 06:20:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:26.866 06:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2484353 ']' 00:09:26.866 06:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.866 06:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:26.866 06:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
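The default_locks pass above hinges on locks_exist, which asks lslocks whether a pid still holds a file lock whose path contains spdk_cpu_lock (one lock file per claimed core under /var/tmp). It reduces to:

  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

The "lslocks: write error" seen in the trace is harmless: grep -q exits on its first match, so lslocks takes a write error on the closed pipe while still printing the rest of its table.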
00:09:26.866 06:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:26.866 06:20:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.866 [2024-11-20 06:20:46.654262] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:09:26.866 [2024-11-20 06:20:46.654321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484353 ] 00:09:26.866 [2024-11-20 06:20:46.739328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.866 [2024-11-20 06:20:46.772527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2484353 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2484353 00:09:27.808 06:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:28.070 06:20:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2484353 00:09:28.070 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 2484353 ']' 00:09:28.070 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 2484353 00:09:28.070 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:09:28.070 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:28.070 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2484353 00:09:28.333 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:28.333 
06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:28.333 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2484353' 00:09:28.333 killing process with pid 2484353 00:09:28.333 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 2484353 00:09:28.333 06:20:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 2484353 00:09:28.333 00:09:28.333 real 0m1.582s 00:09:28.333 user 0m1.697s 00:09:28.333 sys 0m0.555s 00:09:28.333 06:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.333 06:20:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.333 ************************************ 00:09:28.333 END TEST default_locks_via_rpc 00:09:28.333 ************************************ 00:09:28.333 06:20:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:28.333 06:20:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:28.333 06:20:48 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.333 06:20:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:28.593 ************************************ 00:09:28.593 START TEST non_locking_app_on_locked_coremask 00:09:28.593 ************************************ 00:09:28.593 06:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:09:28.593 06:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2484706 00:09:28.593 06:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2484706 /var/tmp/spdk.sock 00:09:28.593 06:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:28.593 06:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2484706 ']' 00:09:28.593 06:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.593 06:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:28.593 06:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.593 06:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:28.593 06:20:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:28.594 [2024-11-20 06:20:48.313382] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
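The default_locks_via_rpc pass that just finished toggles the same per-core locks at runtime rather than at launch: framework_disable_cpumask_locks releases the lock files of a running target, framework_enable_cpumask_locks re-acquires them, and lslocks verifies each state. The RPC pair, with tgt_pid standing in for the running target's pid:

  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: still locked"
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "locked again, as expected"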
00:09:28.594 [2024-11-20 06:20:48.313441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484706 ] 00:09:28.594 [2024-11-20 06:20:48.403504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.594 [2024-11-20 06:20:48.443298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.533 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:29.533 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:29.533 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2484880 00:09:29.533 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2484880 /var/tmp/spdk2.sock 00:09:29.533 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2484880 ']' 00:09:29.533 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:29.533 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:29.533 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:29.533 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:29.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:29.533 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:29.533 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:29.533 [2024-11-20 06:20:49.175063] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:09:29.533 [2024-11-20 06:20:49.175116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484880 ] 00:09:29.533 [2024-11-20 06:20:49.262964] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
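Here the second target shares core 0 with the lock-holding first instance but is started with --disable-cpumask-locks on its own RPC socket, so it never attempts the claim; hence the "CPU core locks deactivated" notice instead of a startup error. The coexistence pattern reduces to:

  build/bin/spdk_tgt -m 0x1 &                                              # claims core 0's lock
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # only the first pid shows a spdk_cpu_lock entry in lslocks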
00:09:29.533 [2024-11-20 06:20:49.262993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.533 [2024-11-20 06:20:49.325476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.104 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:30.104 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:30.104 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2484706 00:09:30.104 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2484706 00:09:30.104 06:20:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:31.043 lslocks: write error 00:09:31.043 06:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2484706 00:09:31.043 06:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2484706 ']' 00:09:31.043 06:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2484706 00:09:31.043 06:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:31.043 06:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:31.043 06:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2484706 00:09:31.043 06:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:31.043 06:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:31.043 06:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2484706' 00:09:31.043 killing process with pid 2484706 00:09:31.043 06:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2484706 00:09:31.043 06:20:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2484706 00:09:31.305 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2484880 00:09:31.305 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2484880 ']' 00:09:31.305 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2484880 00:09:31.305 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:31.305 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:31.305 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2484880 00:09:31.305 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:31.305 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:31.305 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2484880' 00:09:31.305 
killing process with pid 2484880 00:09:31.305 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2484880 00:09:31.305 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2484880 00:09:31.566 00:09:31.566 real 0m3.028s 00:09:31.566 user 0m3.357s 00:09:31.566 sys 0m0.951s 00:09:31.566 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:31.566 06:20:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:31.566 ************************************ 00:09:31.566 END TEST non_locking_app_on_locked_coremask 00:09:31.566 ************************************ 00:09:31.566 06:20:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:31.566 06:20:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:31.566 06:20:51 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:31.566 06:20:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:31.567 ************************************ 00:09:31.567 START TEST locking_app_on_unlocked_coremask 00:09:31.567 ************************************ 00:09:31.567 06:20:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:09:31.567 06:20:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2485272 00:09:31.567 06:20:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2485272 /var/tmp/spdk.sock 00:09:31.567 06:20:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:31.567 06:20:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2485272 ']' 00:09:31.567 06:20:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.567 06:20:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.567 06:20:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.567 06:20:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.567 06:20:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:31.567 [2024-11-20 06:20:51.422821] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:09:31.567 [2024-11-20 06:20:51.422884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485272 ] 00:09:31.827 [2024-11-20 06:20:51.508580] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:31.827 [2024-11-20 06:20:51.508606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.827 [2024-11-20 06:20:51.541848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.398 06:20:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:32.398 06:20:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:32.398 06:20:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2485589 00:09:32.398 06:20:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2485589 /var/tmp/spdk2.sock 00:09:32.398 06:20:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2485589 ']' 00:09:32.398 06:20:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:32.398 06:20:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:32.398 06:20:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:32.398 06:20:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:32.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:32.398 06:20:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:32.398 06:20:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.398 [2024-11-20 06:20:52.264118] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
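locking_app_on_unlocked_coremask inverts the previous case: the first target opts out of locking, so the second, lock-taking target on the same core acquires core 0's lock unopposed, which the locks_exist check against the second pid below confirms. In sketch form:

  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # leaves core 0 unlocked
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # takes the lock itself
  # lslocks -p <second pid> | grep spdk_cpu_lock  -> matches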
00:09:32.398 [2024-11-20 06:20:52.264173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485589 ] 00:09:32.658 [2024-11-20 06:20:52.352908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.658 [2024-11-20 06:20:52.411027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.229 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:33.229 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:33.229 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2485589 00:09:33.229 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:33.229 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2485589 00:09:34.170 lslocks: write error 00:09:34.170 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2485272 00:09:34.170 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2485272 ']' 00:09:34.170 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2485272 00:09:34.170 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:34.170 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:34.170 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2485272 00:09:34.170 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:34.170 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:34.170 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2485272' 00:09:34.170 killing process with pid 2485272 00:09:34.170 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2485272 00:09:34.170 06:20:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2485272 00:09:34.431 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2485589 00:09:34.431 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2485589 ']' 00:09:34.431 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2485589 00:09:34.431 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:34.431 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:34.431 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2485589 00:09:34.431 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:34.431 06:20:54 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:34.431 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2485589' 00:09:34.431 killing process with pid 2485589 00:09:34.431 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2485589 00:09:34.431 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2485589 00:09:34.692 00:09:34.692 real 0m3.045s 00:09:34.692 user 0m3.388s 00:09:34.692 sys 0m0.933s 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:34.692 ************************************ 00:09:34.692 END TEST locking_app_on_unlocked_coremask 00:09:34.692 ************************************ 00:09:34.692 06:20:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:34.692 06:20:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:34.692 06:20:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:34.692 06:20:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:34.692 ************************************ 00:09:34.692 START TEST locking_app_on_locked_coremask 00:09:34.692 ************************************ 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2485967 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2485967 /var/tmp/spdk.sock 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2485967 ']' 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:34.692 06:20:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:34.692 [2024-11-20 06:20:54.541791] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:09:34.692 [2024-11-20 06:20:54.541848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485967 ] 00:09:34.951 [2024-11-20 06:20:54.626933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.951 [2024-11-20 06:20:54.659224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2486252 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2486252 /var/tmp/spdk2.sock 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2486252 /var/tmp/spdk2.sock 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2486252 /var/tmp/spdk2.sock 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2486252 ']' 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:35.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:35.521 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:35.521 [2024-11-20 06:20:55.361965] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
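With both instances configured to take locks on the same 0x1 mask, the second launch just traced is expected to die during startup, and the surrounding NOT waitforlisten turns that failure into a pass, as the claim error on the next lines shows. The conflict reduces to:

  build/bin/spdk_tgt -m 0x1 &                       # claims core 0's lock
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock  # exits: cannot create lock on core 0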
00:09:35.521 [2024-11-20 06:20:55.362016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486252 ] 00:09:35.781 [2024-11-20 06:20:55.449287] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2485967 has claimed it. 00:09:35.781 [2024-11-20 06:20:55.449321] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:36.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2486252) - No such process 00:09:36.350 ERROR: process (pid: 2486252) is no longer running 00:09:36.350 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:36.350 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:09:36.350 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:36.350 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.350 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.350 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.350 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2485967 00:09:36.350 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2485967 00:09:36.350 06:20:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:36.611 lslocks: write error 00:09:36.611 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2485967 00:09:36.611 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2485967 ']' 00:09:36.611 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2485967 00:09:36.611 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:36.611 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:36.611 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2485967 00:09:36.611 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:36.611 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:36.611 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2485967' 00:09:36.611 killing process with pid 2485967 00:09:36.611 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2485967 00:09:36.611 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2485967 00:09:36.872 00:09:36.872 real 0m2.172s 00:09:36.872 user 0m2.440s 00:09:36.872 sys 0m0.611s 00:09:36.872 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:09:36.872 06:20:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:36.872 ************************************ 00:09:36.872 END TEST locking_app_on_locked_coremask 00:09:36.872 ************************************ 00:09:36.872 06:20:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:36.872 06:20:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:36.872 06:20:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:36.872 06:20:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:36.872 ************************************ 00:09:36.872 START TEST locking_overlapped_coremask 00:09:36.872 ************************************ 00:09:36.872 06:20:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:09:36.872 06:20:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2486482 00:09:36.872 06:20:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2486482 /var/tmp/spdk.sock 00:09:36.872 06:20:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:36.872 06:20:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2486482 ']' 00:09:36.872 06:20:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.872 06:20:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:36.872 06:20:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.872 06:20:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:36.872 06:20:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:37.133 [2024-11-20 06:20:56.794313] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:09:37.133 [2024-11-20 06:20:56.794376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486482 ] 00:09:37.133 [2024-11-20 06:20:56.881735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:37.133 [2024-11-20 06:20:56.919053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.133 [2024-11-20 06:20:56.919203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.133 [2024-11-20 06:20:56.919204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.703 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:37.703 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:37.703 06:20:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2486682 00:09:37.703 06:20:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2486682 /var/tmp/spdk2.sock 00:09:37.703 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:37.703 06:20:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:37.703 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2486682 /var/tmp/spdk2.sock 00:09:37.703 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:37.703 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.703 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:37.704 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.704 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2486682 /var/tmp/spdk2.sock 00:09:37.704 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2486682 ']' 00:09:37.704 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:37.704 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:37.704 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:37.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:37.704 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:37.704 06:20:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:37.964 [2024-11-20 06:20:57.654348] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
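The overlapped variant moves the conflict to a partial intersection of coremasks, and the single shared core is enough to kill the second launch, as the core-2 claim error below shows. Reading the masks:

  # -m 0x7  = 0b00111 -> cores 0,1,2  (first target: lock files
  #                                    /var/tmp/spdk_cpu_lock_000..002)
  # -m 0x1c = 0b11100 -> cores 2,3,4  (second target)
  # overlap on core 2 -> the second target cannot claim it and exits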
00:09:37.964 [2024-11-20 06:20:57.654400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486682 ] 00:09:37.964 [2024-11-20 06:20:57.767463] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2486482 has claimed it. 00:09:37.964 [2024-11-20 06:20:57.767502] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:38.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2486682) - No such process 00:09:38.534 ERROR: process (pid: 2486682) is no longer running 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2486482 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 2486482 ']' 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 2486482 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2486482 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2486482' 00:09:38.534 killing process with pid 2486482 00:09:38.534 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 2486482 00:09:38.534 06:20:58 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 2486482 00:09:38.796 00:09:38.796 real 0m1.788s 00:09:38.796 user 0m5.186s 00:09:38.796 sys 0m0.381s 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 ************************************ 00:09:38.796 END TEST locking_overlapped_coremask 00:09:38.796 ************************************ 00:09:38.796 06:20:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:38.796 06:20:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:38.796 06:20:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:38.796 06:20:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 ************************************ 00:09:38.796 START TEST locking_overlapped_coremask_via_rpc 00:09:38.796 ************************************ 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2486952 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2486952 /var/tmp/spdk.sock 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2486952 ']' 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:38.796 06:20:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 [2024-11-20 06:20:58.660431] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:09:38.796 [2024-11-20 06:20:58.660490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486952 ] 00:09:39.056 [2024-11-20 06:20:58.744911] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:39.056 [2024-11-20 06:20:58.744936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:39.056 [2024-11-20 06:20:58.780154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.056 [2024-11-20 06:20:58.780304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.056 [2024-11-20 06:20:58.780306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.625 06:20:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:39.625 06:20:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:39.625 06:20:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2487054 00:09:39.625 06:20:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2487054 /var/tmp/spdk2.sock 00:09:39.626 06:20:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2487054 ']' 00:09:39.626 06:20:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:39.626 06:20:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:39.626 06:20:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:39.626 06:20:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:39.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:39.626 06:20:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:39.626 06:20:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.626 [2024-11-20 06:20:59.514055] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:09:39.626 [2024-11-20 06:20:59.514109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487054 ] 00:09:39.886 [2024-11-20 06:20:59.625587] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
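Unlike the previous test, both targets here are launched with --disable-cpumask-locks, which is why each one logs "CPU core locks deactivated" and boots cleanly even though 0x7 and 0x1c overlap on core 2. Locking is then switched on at runtime over JSON-RPC, and only the first target to call it can claim its cores. A sketch of the two calls this test script issues through rpc_cmd (socket paths as used above):

    # First target listens on the default socket and claims cores 0-2:
    scripts/rpc.py framework_enable_cpumask_locks

    # Second target shares core 2, so this call is expected to fail:
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks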
00:09:39.886 [2024-11-20 06:20:59.625617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:39.886 [2024-11-20 06:20:59.703277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.886 [2024-11-20 06:20:59.706867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.886 [2024-11-20 06:20:59.706869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.458 [2024-11-20 06:21:00.314834] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2486952 has claimed it. 
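The claim error above comes from the per-core lock files that check_remaining_locks inspects later in the test: SPDK holds a lock on one file per claimed core under /var/tmp. After the first target has claimed mask 0x7, the expected state can be listed directly (a sketch; the output shown is the {000..002} set the test compares against):

    $ ls /var/tmp/spdk_cpu_lock_*
    /var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002

Because lock 002 is already held by pid 2486952, the second target's attempt to enable locks on mask 0x1c stops at core 2, and the RPC returns the error response shown next.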
00:09:40.458 request: 00:09:40.458 { 00:09:40.458 "method": "framework_enable_cpumask_locks", 00:09:40.458 "req_id": 1 00:09:40.458 } 00:09:40.458 Got JSON-RPC error response 00:09:40.458 response: 00:09:40.458 { 00:09:40.458 "code": -32603, 00:09:40.458 "message": "Failed to claim CPU core: 2" 00:09:40.458 } 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2486952 /var/tmp/spdk.sock 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2486952 ']' 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:40.458 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.718 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:40.718 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:40.718 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2487054 /var/tmp/spdk2.sock 00:09:40.718 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2487054 ']' 00:09:40.718 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:40.718 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:40.718 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:40.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
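Two things are being asserted around that error response. First, the -32603 code is JSON-RPC's "internal error", carrying SPDK's "Failed to claim CPU core: 2" message; contrast the -32601 "Method not found" that the cmdline test triggers later with env_dpdk_get_mem_stats. Second, the NOT wrapper around rpc_cmd only passes when the wrapped command fails, which is why the trace treats es=1 as success. Its shape is roughly as follows (a simplified sketch; the real helper in autotest_common.sh also validates that its argument is callable):

    NOT() {
        if "$@"; then
            return 1    # wrapped command unexpectedly succeeded
        fi
        return 0        # wrapped command failed, as the test requires
    }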
00:09:40.718 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:40.718 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.979 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:40.979 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:40.979 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:40.979 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:40.979 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:40.979 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:40.979 00:09:40.979 real 0m2.088s 00:09:40.979 user 0m0.864s 00:09:40.979 sys 0m0.154s 00:09:40.979 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:40.979 06:21:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.979 ************************************ 00:09:40.979 END TEST locking_overlapped_coremask_via_rpc 00:09:40.979 ************************************ 00:09:40.979 06:21:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:40.979 06:21:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2486952 ]] 00:09:40.979 06:21:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2486952 00:09:40.979 06:21:00 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2486952 ']' 00:09:40.979 06:21:00 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2486952 00:09:40.979 06:21:00 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:09:40.979 06:21:00 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:40.979 06:21:00 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2486952 00:09:40.979 06:21:00 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:40.979 06:21:00 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:40.979 06:21:00 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2486952' 00:09:40.979 killing process with pid 2486952 00:09:40.979 06:21:00 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2486952 00:09:40.979 06:21:00 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2486952 00:09:41.240 06:21:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2487054 ]] 00:09:41.240 06:21:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2487054 00:09:41.240 06:21:00 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2487054 ']' 00:09:41.240 06:21:00 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2487054 00:09:41.240 06:21:00 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:09:41.240 06:21:00 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:09:41.240 06:21:00 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2487054 00:09:41.240 06:21:01 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:09:41.240 06:21:01 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:09:41.240 06:21:01 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2487054' 00:09:41.240 killing process with pid 2487054 00:09:41.240 06:21:01 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2487054 00:09:41.240 06:21:01 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2487054 00:09:41.500 06:21:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:41.500 06:21:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:41.500 06:21:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2486952 ]] 00:09:41.500 06:21:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2486952 00:09:41.500 06:21:01 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2486952 ']' 00:09:41.500 06:21:01 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2486952 00:09:41.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2486952) - No such process 00:09:41.500 06:21:01 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2486952 is not found' 00:09:41.500 Process with pid 2486952 is not found 00:09:41.500 06:21:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2487054 ]] 00:09:41.500 06:21:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2487054 00:09:41.500 06:21:01 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2487054 ']' 00:09:41.500 06:21:01 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2487054 00:09:41.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2487054) - No such process 00:09:41.500 06:21:01 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2487054 is not found' 00:09:41.500 Process with pid 2487054 is not found 00:09:41.500 06:21:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:41.500 00:09:41.500 real 0m16.464s 00:09:41.500 user 0m28.565s 00:09:41.500 sys 0m5.073s 00:09:41.500 06:21:01 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:41.500 06:21:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:41.500 ************************************ 00:09:41.500 END TEST cpu_locks 00:09:41.500 ************************************ 00:09:41.500 00:09:41.500 real 0m43.026s 00:09:41.500 user 1m24.951s 00:09:41.500 sys 0m8.518s 00:09:41.500 06:21:01 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:41.500 06:21:01 event -- common/autotest_common.sh@10 -- # set +x 00:09:41.500 ************************************ 00:09:41.500 END TEST event 00:09:41.500 ************************************ 00:09:41.500 06:21:01 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:41.500 06:21:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:41.500 06:21:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:41.500 06:21:01 -- common/autotest_common.sh@10 -- # set +x 00:09:41.500 ************************************ 00:09:41.500 START TEST thread 00:09:41.500 ************************************ 00:09:41.500 06:21:01 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:41.762 * Looking for test storage... 00:09:41.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:41.762 06:21:01 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.762 06:21:01 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.762 06:21:01 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.762 06:21:01 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.762 06:21:01 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.762 06:21:01 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.762 06:21:01 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.762 06:21:01 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.762 06:21:01 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.762 06:21:01 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.762 06:21:01 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.762 06:21:01 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:41.762 06:21:01 thread -- scripts/common.sh@345 -- # : 1 00:09:41.762 06:21:01 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.762 06:21:01 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:41.762 06:21:01 thread -- scripts/common.sh@365 -- # decimal 1 00:09:41.762 06:21:01 thread -- scripts/common.sh@353 -- # local d=1 00:09:41.762 06:21:01 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.762 06:21:01 thread -- scripts/common.sh@355 -- # echo 1 00:09:41.762 06:21:01 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.762 06:21:01 thread -- scripts/common.sh@366 -- # decimal 2 00:09:41.762 06:21:01 thread -- scripts/common.sh@353 -- # local d=2 00:09:41.762 06:21:01 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.762 06:21:01 thread -- scripts/common.sh@355 -- # echo 2 00:09:41.762 06:21:01 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.762 06:21:01 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.762 06:21:01 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.762 06:21:01 thread -- scripts/common.sh@368 -- # return 0 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:41.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.762 --rc genhtml_branch_coverage=1 00:09:41.762 --rc genhtml_function_coverage=1 00:09:41.762 --rc genhtml_legend=1 00:09:41.762 --rc geninfo_all_blocks=1 00:09:41.762 --rc geninfo_unexecuted_blocks=1 00:09:41.762 00:09:41.762 ' 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:41.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.762 --rc genhtml_branch_coverage=1 00:09:41.762 --rc genhtml_function_coverage=1 00:09:41.762 --rc genhtml_legend=1 00:09:41.762 --rc geninfo_all_blocks=1 00:09:41.762 --rc geninfo_unexecuted_blocks=1 00:09:41.762 
00:09:41.762 ' 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:41.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.762 --rc genhtml_branch_coverage=1 00:09:41.762 --rc genhtml_function_coverage=1 00:09:41.762 --rc genhtml_legend=1 00:09:41.762 --rc geninfo_all_blocks=1 00:09:41.762 --rc geninfo_unexecuted_blocks=1 00:09:41.762 00:09:41.762 ' 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:41.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.762 --rc genhtml_branch_coverage=1 00:09:41.762 --rc genhtml_function_coverage=1 00:09:41.762 --rc genhtml_legend=1 00:09:41.762 --rc geninfo_all_blocks=1 00:09:41.762 --rc geninfo_unexecuted_blocks=1 00:09:41.762 00:09:41.762 ' 00:09:41.762 06:21:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:41.762 06:21:01 thread -- common/autotest_common.sh@10 -- # set +x 00:09:41.762 ************************************ 00:09:41.762 START TEST thread_poller_perf 00:09:41.762 ************************************ 00:09:41.762 06:21:01 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:41.762 [2024-11-20 06:21:01.629010] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:09:41.762 [2024-11-20 06:21:01.629127] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487625 ] 00:09:42.023 [2024-11-20 06:21:01.721506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.023 [2024-11-20 06:21:01.762851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.023 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:09:42.962 [2024-11-20T05:21:02.882Z] ====================================== 00:09:42.962 [2024-11-20T05:21:02.882Z] busy:2404929256 (cyc) 00:09:42.962 [2024-11-20T05:21:02.882Z] total_run_count: 417000 00:09:42.962 [2024-11-20T05:21:02.882Z] tsc_hz: 2400000000 (cyc) 00:09:42.962 [2024-11-20T05:21:02.882Z] ====================================== 00:09:42.962 [2024-11-20T05:21:02.882Z] poller_cost: 5767 (cyc), 2402 (nsec) 00:09:42.962 00:09:42.962 real 0m1.189s 00:09:42.962 user 0m1.102s 00:09:42.962 sys 0m0.081s 00:09:42.962 06:21:02 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:42.962 06:21:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:42.962 ************************************ 00:09:42.962 END TEST thread_poller_perf 00:09:42.962 ************************************ 00:09:42.962 06:21:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:42.962 06:21:02 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:09:42.962 06:21:02 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:42.962 06:21:02 thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.962 ************************************ 00:09:42.962 START TEST thread_poller_perf 00:09:42.962 ************************************ 00:09:42.962 06:21:02 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:43.223 [2024-11-20 06:21:02.896990] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:09:43.223 [2024-11-20 06:21:02.897089] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487961 ] 00:09:43.223 [2024-11-20 06:21:02.988392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.223 [2024-11-20 06:21:03.023889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.223 Running 1000 pollers for 1 seconds with 0 microseconds period. 
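The poller_cost figures in these tables are straight division: busy TSC cycles over total poller invocations, converted to nanoseconds via the 2400000000 Hz TSC (2.4 cycles per nanosecond). For the 1-microsecond-period run above:

    2404929256 cyc / 417000 runs ≈ 5767 cyc per poll
    5767 cyc / 2.4 cyc/ns ≈ 2402 ns

The 0-microsecond run that follows removes the timer wait, so the same 1000 pollers complete about 13x more iterations (5561000) at a much lower 431 cyc (179 nsec) per poll.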
00:09:44.163 [2024-11-20T05:21:04.083Z] ====================================== 00:09:44.163 [2024-11-20T05:21:04.083Z] busy:2401448584 (cyc) 00:09:44.163 [2024-11-20T05:21:04.083Z] total_run_count: 5561000 00:09:44.163 [2024-11-20T05:21:04.083Z] tsc_hz: 2400000000 (cyc) 00:09:44.163 [2024-11-20T05:21:04.083Z] ====================================== 00:09:44.163 [2024-11-20T05:21:04.083Z] poller_cost: 431 (cyc), 179 (nsec) 00:09:44.163 00:09:44.163 real 0m1.177s 00:09:44.163 user 0m1.096s 00:09:44.163 sys 0m0.077s 00:09:44.163 06:21:04 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:44.163 06:21:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:44.163 ************************************ 00:09:44.163 END TEST thread_poller_perf 00:09:44.163 ************************************ 00:09:44.423 06:21:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:44.423 00:09:44.423 real 0m2.724s 00:09:44.423 user 0m2.372s 00:09:44.423 sys 0m0.365s 00:09:44.423 06:21:04 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:44.423 06:21:04 thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.423 ************************************ 00:09:44.423 END TEST thread 00:09:44.423 ************************************ 00:09:44.423 06:21:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:44.423 06:21:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:44.423 06:21:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:44.423 06:21:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:44.423 06:21:04 -- common/autotest_common.sh@10 -- # set +x 00:09:44.423 ************************************ 00:09:44.423 START TEST app_cmdline 00:09:44.423 ************************************ 00:09:44.423 06:21:04 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:44.424 * Looking for test storage... 
00:09:44.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:44.424 06:21:04 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:44.424 06:21:04 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:09:44.424 06:21:04 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:44.424 06:21:04 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:44.424 06:21:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.424 06:21:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.685 06:21:04 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:44.685 06:21:04 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.685 06:21:04 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:44.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.685 --rc genhtml_branch_coverage=1 00:09:44.685 --rc genhtml_function_coverage=1 00:09:44.685 --rc genhtml_legend=1 00:09:44.685 --rc geninfo_all_blocks=1 00:09:44.685 --rc geninfo_unexecuted_blocks=1 00:09:44.685 00:09:44.685 ' 00:09:44.685 06:21:04 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:44.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.685 --rc genhtml_branch_coverage=1 00:09:44.685 --rc genhtml_function_coverage=1 00:09:44.685 --rc genhtml_legend=1 00:09:44.685 --rc geninfo_all_blocks=1 00:09:44.685 --rc geninfo_unexecuted_blocks=1 
00:09:44.685 00:09:44.685 ' 00:09:44.685 06:21:04 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:44.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.685 --rc genhtml_branch_coverage=1 00:09:44.685 --rc genhtml_function_coverage=1 00:09:44.685 --rc genhtml_legend=1 00:09:44.685 --rc geninfo_all_blocks=1 00:09:44.685 --rc geninfo_unexecuted_blocks=1 00:09:44.685 00:09:44.685 ' 00:09:44.685 06:21:04 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:44.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.685 --rc genhtml_branch_coverage=1 00:09:44.685 --rc genhtml_function_coverage=1 00:09:44.685 --rc genhtml_legend=1 00:09:44.685 --rc geninfo_all_blocks=1 00:09:44.685 --rc geninfo_unexecuted_blocks=1 00:09:44.685 00:09:44.685 ' 00:09:44.685 06:21:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:44.685 06:21:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2488365 00:09:44.685 06:21:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2488365 00:09:44.685 06:21:04 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:44.685 06:21:04 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 2488365 ']' 00:09:44.685 06:21:04 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.685 06:21:04 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:44.685 06:21:04 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.685 06:21:04 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:44.685 06:21:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:44.685 [2024-11-20 06:21:04.428446] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:09:44.685 [2024-11-20 06:21:04.428523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488365 ] 00:09:44.685 [2024-11-20 06:21:04.516502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.685 [2024-11-20 06:21:04.551116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:09:45.626 06:21:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:45.626 { 00:09:45.626 "version": "SPDK v25.01-pre git sha1 57b682926", 00:09:45.626 "fields": { 00:09:45.626 "major": 25, 00:09:45.626 "minor": 1, 00:09:45.626 "patch": 0, 00:09:45.626 "suffix": "-pre", 00:09:45.626 "commit": "57b682926" 00:09:45.626 } 00:09:45.626 } 00:09:45.626 06:21:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:45.626 06:21:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:45.626 06:21:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:45.626 06:21:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:45.626 06:21:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:45.626 06:21:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.626 06:21:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.626 06:21:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:45.626 06:21:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:45.626 06:21:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.626 06:21:05 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.627 06:21:05 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:45.627 06:21:05 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.627 06:21:05 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:45.627 06:21:05 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:45.887 request: 00:09:45.887 { 00:09:45.887 "method": "env_dpdk_get_mem_stats", 00:09:45.887 "req_id": 1 00:09:45.887 } 00:09:45.887 Got JSON-RPC error response 00:09:45.887 response: 00:09:45.887 { 00:09:45.887 "code": -32601, 00:09:45.887 "message": "Method not found" 00:09:45.887 } 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:45.887 06:21:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2488365 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 2488365 ']' 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 2488365 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2488365 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2488365' 00:09:45.887 killing process with pid 2488365 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@971 -- # kill 2488365 00:09:45.887 06:21:05 app_cmdline -- common/autotest_common.sh@976 -- # wait 2488365 00:09:46.147 00:09:46.147 real 0m1.667s 00:09:46.147 user 0m2.000s 00:09:46.147 sys 0m0.442s 00:09:46.147 06:21:05 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:46.147 06:21:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:46.147 ************************************ 00:09:46.147 END TEST app_cmdline 00:09:46.147 ************************************ 00:09:46.147 06:21:05 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:46.147 06:21:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:46.147 06:21:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:46.147 06:21:05 -- common/autotest_common.sh@10 -- # set +x 00:09:46.147 ************************************ 00:09:46.147 START TEST version 00:09:46.147 ************************************ 00:09:46.147 06:21:05 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:46.147 * Looking for test storage... 
00:09:46.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:46.147 06:21:06 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:46.147 06:21:06 version -- common/autotest_common.sh@1691 -- # lcov --version 00:09:46.147 06:21:06 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:46.407 06:21:06 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:46.407 06:21:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.407 06:21:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.407 06:21:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.407 06:21:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.407 06:21:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.407 06:21:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.407 06:21:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.407 06:21:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.407 06:21:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.407 06:21:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.407 06:21:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.407 06:21:06 version -- scripts/common.sh@344 -- # case "$op" in 00:09:46.407 06:21:06 version -- scripts/common.sh@345 -- # : 1 00:09:46.407 06:21:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.407 06:21:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.407 06:21:06 version -- scripts/common.sh@365 -- # decimal 1 00:09:46.407 06:21:06 version -- scripts/common.sh@353 -- # local d=1 00:09:46.407 06:21:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.407 06:21:06 version -- scripts/common.sh@355 -- # echo 1 00:09:46.407 06:21:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.407 06:21:06 version -- scripts/common.sh@366 -- # decimal 2 00:09:46.407 06:21:06 version -- scripts/common.sh@353 -- # local d=2 00:09:46.407 06:21:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.407 06:21:06 version -- scripts/common.sh@355 -- # echo 2 00:09:46.407 06:21:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.407 06:21:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.407 06:21:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.407 06:21:06 version -- scripts/common.sh@368 -- # return 0 00:09:46.407 06:21:06 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.407 06:21:06 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:46.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.407 --rc genhtml_branch_coverage=1 00:09:46.407 --rc genhtml_function_coverage=1 00:09:46.407 --rc genhtml_legend=1 00:09:46.407 --rc geninfo_all_blocks=1 00:09:46.407 --rc geninfo_unexecuted_blocks=1 00:09:46.407 00:09:46.407 ' 00:09:46.407 06:21:06 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:46.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.407 --rc genhtml_branch_coverage=1 00:09:46.407 --rc genhtml_function_coverage=1 00:09:46.407 --rc genhtml_legend=1 00:09:46.407 --rc geninfo_all_blocks=1 00:09:46.407 --rc geninfo_unexecuted_blocks=1 00:09:46.407 00:09:46.407 ' 00:09:46.407 06:21:06 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:46.407 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.407 --rc genhtml_branch_coverage=1 00:09:46.407 --rc genhtml_function_coverage=1 00:09:46.407 --rc genhtml_legend=1 00:09:46.407 --rc geninfo_all_blocks=1 00:09:46.407 --rc geninfo_unexecuted_blocks=1 00:09:46.407 00:09:46.407 ' 00:09:46.407 06:21:06 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:46.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.407 --rc genhtml_branch_coverage=1 00:09:46.407 --rc genhtml_function_coverage=1 00:09:46.407 --rc genhtml_legend=1 00:09:46.407 --rc geninfo_all_blocks=1 00:09:46.407 --rc geninfo_unexecuted_blocks=1 00:09:46.407 00:09:46.407 ' 00:09:46.407 06:21:06 version -- app/version.sh@17 -- # get_header_version major 00:09:46.407 06:21:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:46.407 06:21:06 version -- app/version.sh@14 -- # cut -f2 00:09:46.407 06:21:06 version -- app/version.sh@14 -- # tr -d '"' 00:09:46.407 06:21:06 version -- app/version.sh@17 -- # major=25 00:09:46.407 06:21:06 version -- app/version.sh@18 -- # get_header_version minor 00:09:46.407 06:21:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:46.407 06:21:06 version -- app/version.sh@14 -- # cut -f2 00:09:46.407 06:21:06 version -- app/version.sh@14 -- # tr -d '"' 00:09:46.407 06:21:06 version -- app/version.sh@18 -- # minor=1 00:09:46.407 06:21:06 version -- app/version.sh@19 -- # get_header_version patch 00:09:46.407 06:21:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:46.407 06:21:06 version -- app/version.sh@14 -- # cut -f2 00:09:46.407 06:21:06 version -- app/version.sh@14 -- # tr -d '"' 00:09:46.407 06:21:06 version -- app/version.sh@19 -- # patch=0 00:09:46.407 06:21:06 version -- app/version.sh@20 -- # get_header_version suffix 00:09:46.407 06:21:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:46.407 06:21:06 version -- app/version.sh@14 -- # cut -f2 00:09:46.407 06:21:06 version -- app/version.sh@14 -- # tr -d '"' 00:09:46.407 06:21:06 version -- app/version.sh@20 -- # suffix=-pre 00:09:46.407 06:21:06 version -- app/version.sh@22 -- # version=25.1 00:09:46.407 06:21:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:46.407 06:21:06 version -- app/version.sh@28 -- # version=25.1rc0 00:09:46.407 06:21:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:46.407 06:21:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:46.407 06:21:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:46.407 06:21:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:46.407 00:09:46.407 real 0m0.274s 00:09:46.407 user 0m0.164s 00:09:46.407 sys 0m0.159s 00:09:46.407 06:21:06 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:46.407 
06:21:06 version -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 ************************************ 00:09:46.407 END TEST version 00:09:46.407 ************************************ 00:09:46.407 06:21:06 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:46.407 06:21:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:46.407 06:21:06 -- spdk/autotest.sh@194 -- # uname -s 00:09:46.407 06:21:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:46.407 06:21:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:46.407 06:21:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:46.407 06:21:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:46.407 06:21:06 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:46.407 06:21:06 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:46.407 06:21:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.407 06:21:06 -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 06:21:06 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:46.407 06:21:06 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:46.407 06:21:06 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:09:46.407 06:21:06 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:09:46.407 06:21:06 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:09:46.407 06:21:06 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:09:46.407 06:21:06 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:46.407 06:21:06 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:46.407 06:21:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:46.407 06:21:06 -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 ************************************ 00:09:46.407 START TEST nvmf_tcp 00:09:46.407 ************************************ 00:09:46.407 06:21:06 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:46.667 * Looking for test storage... 
00:09:46.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:46.667 06:21:06 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:46.667 06:21:06 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:46.667 06:21:06 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:46.667 06:21:06 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:46.667 06:21:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:46.668 06:21:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.668 06:21:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:46.668 06:21:06 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.668 06:21:06 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.668 06:21:06 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.668 06:21:06 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:46.668 06:21:06 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.668 06:21:06 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:46.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.668 --rc genhtml_branch_coverage=1 00:09:46.668 --rc genhtml_function_coverage=1 00:09:46.668 --rc genhtml_legend=1 00:09:46.668 --rc geninfo_all_blocks=1 00:09:46.668 --rc geninfo_unexecuted_blocks=1 00:09:46.668 00:09:46.668 ' 00:09:46.668 06:21:06 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:46.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.668 --rc genhtml_branch_coverage=1 00:09:46.668 --rc genhtml_function_coverage=1 00:09:46.668 --rc genhtml_legend=1 00:09:46.668 --rc geninfo_all_blocks=1 00:09:46.668 --rc geninfo_unexecuted_blocks=1 00:09:46.668 00:09:46.668 ' 00:09:46.668 06:21:06 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:09:46.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.668 --rc genhtml_branch_coverage=1 00:09:46.668 --rc genhtml_function_coverage=1 00:09:46.668 --rc genhtml_legend=1 00:09:46.668 --rc geninfo_all_blocks=1 00:09:46.668 --rc geninfo_unexecuted_blocks=1 00:09:46.668 00:09:46.668 ' 00:09:46.668 06:21:06 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:46.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.668 --rc genhtml_branch_coverage=1 00:09:46.668 --rc genhtml_function_coverage=1 00:09:46.668 --rc genhtml_legend=1 00:09:46.668 --rc geninfo_all_blocks=1 00:09:46.668 --rc geninfo_unexecuted_blocks=1 00:09:46.668 00:09:46.668 ' 00:09:46.668 06:21:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:46.668 06:21:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:46.668 06:21:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:46.668 06:21:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:46.668 06:21:06 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:46.668 06:21:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:46.668 ************************************ 00:09:46.668 START TEST nvmf_target_core 00:09:46.668 ************************************ 00:09:46.668 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:46.928 * Looking for test storage... 00:09:46.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:46.928 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:46.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.929 --rc genhtml_branch_coverage=1 00:09:46.929 --rc genhtml_function_coverage=1 00:09:46.929 --rc genhtml_legend=1 00:09:46.929 --rc geninfo_all_blocks=1 00:09:46.929 --rc geninfo_unexecuted_blocks=1 00:09:46.929 00:09:46.929 ' 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:46.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.929 --rc genhtml_branch_coverage=1 00:09:46.929 --rc genhtml_function_coverage=1 00:09:46.929 --rc genhtml_legend=1 00:09:46.929 --rc geninfo_all_blocks=1 00:09:46.929 --rc geninfo_unexecuted_blocks=1 00:09:46.929 00:09:46.929 ' 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:46.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.929 --rc genhtml_branch_coverage=1 00:09:46.929 --rc genhtml_function_coverage=1 00:09:46.929 --rc genhtml_legend=1 00:09:46.929 --rc geninfo_all_blocks=1 00:09:46.929 --rc geninfo_unexecuted_blocks=1 00:09:46.929 00:09:46.929 ' 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:46.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.929 --rc genhtml_branch_coverage=1 00:09:46.929 --rc genhtml_function_coverage=1 00:09:46.929 --rc genhtml_legend=1 00:09:46.929 --rc geninfo_all_blocks=1 00:09:46.929 --rc geninfo_unexecuted_blocks=1 00:09:46.929 00:09:46.929 ' 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:46.929 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:46.930 06:21:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:46.930 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:46.930 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:46.930 06:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.930 
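The run_test call above is the harness's generic wrapper: it prints the START TEST banner that follows, times the child script, and later prints a matching END TEST banner with real/user/sys totals (visible further down). A minimal standalone re-creation of that pattern, under the assumption that the banner text and widths are purely cosmetic and not SPDK's exact helper:

    # Approximation of the banner-and-timing wrapper seen in this log.
    run_test() {
        local name=$1; shift
        printf '%s\n' '************************************' \
                      "START TEST $name" \
                      '************************************'
        time "$@"                        # run the test script, print timings
        local rc=$?
        printf '%s\n' '************************************' \
                      "END TEST $name" \
                      '************************************'
        return $rc
    }

    # Usage, mirroring the invocation above:
    # run_test nvmf_abort ./test/nvmf/target/abort.sh --transport=tcp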
************************************ 00:09:46.930 START TEST nvmf_abort 00:09:46.930 ************************************ 00:09:46.930 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:47.191 * Looking for test storage... 00:09:47.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.191 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:47.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.191 --rc genhtml_branch_coverage=1 00:09:47.191 --rc genhtml_function_coverage=1 00:09:47.191 --rc genhtml_legend=1 00:09:47.191 --rc geninfo_all_blocks=1 00:09:47.191 --rc geninfo_unexecuted_blocks=1 00:09:47.191 00:09:47.191 ' 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:47.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.191 --rc genhtml_branch_coverage=1 00:09:47.191 --rc genhtml_function_coverage=1 00:09:47.191 --rc genhtml_legend=1 00:09:47.191 --rc geninfo_all_blocks=1 00:09:47.191 --rc geninfo_unexecuted_blocks=1 00:09:47.191 00:09:47.191 ' 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:47.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.191 --rc genhtml_branch_coverage=1 00:09:47.191 --rc genhtml_function_coverage=1 00:09:47.191 --rc genhtml_legend=1 00:09:47.191 --rc geninfo_all_blocks=1 00:09:47.191 --rc geninfo_unexecuted_blocks=1 00:09:47.191 00:09:47.191 ' 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:47.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.191 --rc genhtml_branch_coverage=1 00:09:47.191 --rc genhtml_function_coverage=1 00:09:47.191 --rc genhtml_legend=1 00:09:47.191 --rc geninfo_all_blocks=1 00:09:47.191 --rc geninfo_unexecuted_blocks=1 00:09:47.191 00:09:47.191 ' 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.191 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
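The lt 1.15 2 / cmp_versions exchange that repeats before every test script is a field-wise numeric version comparison: split both version strings on ., - and :, then compare component by component, treating missing fields as zero. A minimal standalone sketch of the same logic (not the scripts/common.sh source itself):

    # Returns 0 (true) when $1 is a strictly lower version than $2.
    version_lt() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # lower field: less-than
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1    # higher field: not less-than
        done
        return 1                                         # equal: not less-than
    }

    # version_lt 1.15 2 succeeds, so the harness treats the installed lcov
    # as pre-2.x and sets the --rc lcov_*_coverage=1 options seen above.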
00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.192 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.331 06:21:14 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:55.331 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:55.331 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:55.331 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:55.331 06:21:14 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:55.332 Found net devices under 0000:31:00.0: cvl_0_0 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:55.332 Found net devices under 0000:31:00.1: cvl_0_1 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.332 06:21:14 
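The discovery loop above works purely through sysfs: for each whitelisted PCI function (here the two Intel E810 ports, device id 0x159b), it globs the net/ directory under the device node to find the kernel interface name. The core of that mapping as a standalone sketch, with the addresses taken from this log:

    # Map PCI functions to their net devices via sysfs, as the loop above does.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] || continue                   # skip devices with no netdev
            echo "Found net device under $pci: ${path##*/}"
        done
    done
    # On this host both ports resolve to the renamed ice-driver interfaces
    # cvl_0_0 and cvl_0_1, which become the target/initiator pair below.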
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:55.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:09:55.332 00:09:55.332 --- 10.0.0.2 ping statistics --- 00:09:55.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.332 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:09:55.332 00:09:55.332 --- 10.0.0.1 ping statistics --- 00:09:55.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.332 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2493347 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2493347 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2493347 ']' 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:55.332 06:21:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.332 [2024-11-20 06:21:14.801060] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
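To summarize the plumbing nvmftestinit just performed: with two physical ports on one host, the target port is moved into a private network namespace so the initiator and target talk over a real wire rather than the loopback stack. Reconstructed from the commands logged above:

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:...'       # tagged for later cleanup
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The sub-millisecond round-trip times in both directions confirm the link before nvmf_tgt is started inside the namespace via ip netns exec with -m 0xE (three reactor cores, matching the reactors on cores 1-3 reported below).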
00:09:55.332 [2024-11-20 06:21:14.801126] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.332 [2024-11-20 06:21:14.900571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.332 [2024-11-20 06:21:14.954618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.332 [2024-11-20 06:21:14.954667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.332 [2024-11-20 06:21:14.954676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.332 [2024-11-20 06:21:14.954684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.332 [2024-11-20 06:21:14.954690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.332 [2024-11-20 06:21:14.956814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.332 [2024-11-20 06:21:14.956974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.332 [2024-11-20 06:21:14.956975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.904 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:55.904 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 [2024-11-20 06:21:15.685512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 Malloc0 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 Delay0 
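rpc_cmd in the surrounding lines is a thin wrapper over SPDK's JSON-RPC client; the provisioning sequence at this point, written out as a plain session for clarity. The method names and arguments are verbatim from the log; the scripts/rpc.py invocation style and default socket are assumptions of this sketch:

    RPC="scripts/rpc.py"                                  # assumed client path
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256   # TCP transport
    $RPC bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB RAM disk, 4 KiB blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s injected latency
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The Delay0 layer is the point of the setup: the injected latency keeps I/Os in flight long enough for the abort example below to race abort commands against them.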
00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 [2024-11-20 06:21:15.773673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.905 06:21:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:56.165 [2024-11-20 06:21:15.925313] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:58.711 Initializing NVMe Controllers 00:09:58.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:58.711 controller IO queue size 128 less than required 00:09:58.711 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:58.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:58.711 Initialization complete. Launching workers. 
00:09:58.711 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28512 00:09:58.711 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28577, failed to submit 62 00:09:58.711 success 28516, unsuccessful 61, failed 0 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.711 rmmod nvme_tcp 00:09:58.711 rmmod nvme_fabrics 00:09:58.711 rmmod nvme_keyring 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2493347 ']' 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2493347 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2493347 ']' 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2493347 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2493347 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2493347' 00:09:58.711 killing process with pid 2493347 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2493347 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2493347 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.711 06:21:18 
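The abort counters above are internally consistent: 127 I/Os completed normally and 28,512 were failed by abort, 28,639 in total; on the abort side, 28,577 abort commands were submitted and 62 could not be submitted, again 28,639, of which 28,516 succeeded and 61 did not. Teardown then proceeds as logged: stop the target, unload the host-side NVMe/TCP modules, drop the tagged iptables rule, and remove the namespace plumbing. Reconstructed as a sketch (the namespace deletion is assumed to be what the log's _remove_spdk_ns performs):

    kill "$nvmfpid" && wait "$nvmfpid"          # nvmf_tgt, pid 2493347 here
    modprobe -v -r nvme-tcp                     # rmmods nvme_tcp and dependents
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the tagged rule
    ip netns delete cvl_0_0_ns_spdk             # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                    # clear the initiator address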
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.711 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.625 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.625 00:10:00.625 real 0m13.639s 00:10:00.625 user 0m14.235s 00:10:00.625 sys 0m6.812s 00:10:00.625 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:00.625 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.626 ************************************ 00:10:00.626 END TEST nvmf_abort 00:10:00.626 ************************************ 00:10:00.626 06:21:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:00.626 06:21:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:00.626 06:21:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:00.626 06:21:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.626 ************************************ 00:10:00.626 START TEST nvmf_ns_hotplug_stress 00:10:00.626 ************************************ 00:10:00.626 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:00.887 * Looking for test storage... 
00:10:00.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:00.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.887 --rc genhtml_branch_coverage=1 00:10:00.887 --rc genhtml_function_coverage=1 00:10:00.887 --rc genhtml_legend=1 00:10:00.887 --rc geninfo_all_blocks=1 00:10:00.887 --rc geninfo_unexecuted_blocks=1 00:10:00.887 00:10:00.887 ' 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:00.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.887 --rc genhtml_branch_coverage=1 00:10:00.887 --rc genhtml_function_coverage=1 00:10:00.887 --rc genhtml_legend=1 00:10:00.887 --rc geninfo_all_blocks=1 00:10:00.887 --rc geninfo_unexecuted_blocks=1 00:10:00.887 00:10:00.887 ' 00:10:00.887 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:00.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.887 --rc genhtml_branch_coverage=1 00:10:00.888 --rc genhtml_function_coverage=1 00:10:00.888 --rc genhtml_legend=1 00:10:00.888 --rc geninfo_all_blocks=1 00:10:00.888 --rc geninfo_unexecuted_blocks=1 00:10:00.888 00:10:00.888 ' 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:00.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.888 --rc genhtml_branch_coverage=1 00:10:00.888 --rc genhtml_function_coverage=1 00:10:00.888 --rc genhtml_legend=1 00:10:00.888 --rc geninfo_all_blocks=1 00:10:00.888 --rc geninfo_unexecuted_blocks=1 00:10:00.888 00:10:00.888 ' 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.888 06:21:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.162 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:09.162 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.163 
06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:09.163 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:09.163 Found net devices under 0000:31:00.0: cvl_0_0 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:09.163 Found net devices under 0000:31:00.1: cvl_0_1 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:09.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:10:09.163 00:10:09.163 --- 10.0.0.2 ping statistics --- 00:10:09.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.163 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:09.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:10:09.163 00:10:09.163 --- 10.0.0.1 ping statistics --- 00:10:09.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.163 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2498425 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2498425 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
2498425 ']' 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:09.163 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.163 [2024-11-20 06:21:28.549047] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:10:09.163 [2024-11-20 06:21:28.549115] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.163 [2024-11-20 06:21:28.646476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:09.163 [2024-11-20 06:21:28.697521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.163 [2024-11-20 06:21:28.697572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.163 [2024-11-20 06:21:28.697581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.163 [2024-11-20 06:21:28.697588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.163 [2024-11-20 06:21:28.697595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
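Up to this point the trace is nvmftestinit building the test topology: both Intel e810 ports (0000:31:00.0/1, device 0x159b, driver ice) are discovered as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target endpoint (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420, both directions are ping-verified, and nvmf_tgt (pid 2498425, core mask 0xE) is launched inside the namespace. A minimal standalone sketch of the same topology follows — the veth pair is a stand-in for the two physical ice ports, everything else mirrors the commands traced above:

# Two-endpoint NVMe/TCP topology on one host (sketch; a veth pair replaces the e810 ports).
ip link add cvl_0_0 type veth peer name cvl_0_1     # stand-in for the two NIC ports
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> initiator

The target is then started with ip netns exec so its listener on 10.0.0.2:4420 sits behind the namespace boundary, which lets a single machine exercise a real TCP path between initiator and target.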
00:10:09.164 [2024-11-20 06:21:28.699627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.164 [2024-11-20 06:21:28.699806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.164 [2024-11-20 06:21:28.699851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.731 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:09.732 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:10:09.732 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:09.732 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:09.732 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.732 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.732 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:09.732 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:09.732 [2024-11-20 06:21:29.582432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.732 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:09.991 06:21:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.252 [2024-11-20 06:21:29.981389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.252 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:10.512 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:10.512 Malloc0 00:10:10.772 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:10.772 Delay0 00:10:10.772 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.033 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:11.292 NULL1 00:10:11.292 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:11.292 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2498838 00:10:11.552 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:11.552 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:11.552 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.552 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.812 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:11.812 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:12.072 true 00:10:12.072 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:12.072 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.072 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.332 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:12.332 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:12.592 true 00:10:12.592 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:12.592 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.592 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.853 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:12.853 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:13.112 true 00:10:13.112 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:13.112 06:21:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.112 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.373 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:13.373 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:13.634 true 00:10:13.634 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:13.634 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.894 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.894 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:13.894 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:14.153 true 00:10:14.153 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:14.153 06:21:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.414 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.414 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:14.414 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:14.674 true 00:10:14.674 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:14.674 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.934 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.934 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:14.934 06:21:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:15.194 true 00:10:15.194 06:21:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:15.194 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.454 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.715 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:15.715 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:15.715 true 00:10:15.715 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:15.715 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.976 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.237 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:16.237 06:21:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:16.237 true 00:10:16.237 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:16.237 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.497 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.758 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:16.758 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:16.758 true 00:10:16.758 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:16.758 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.019 06:21:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.279 06:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:17.279 06:21:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:17.540 true 00:10:17.540 06:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:17.540 06:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.540 06:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.800 06:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:17.800 06:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:18.061 true 00:10:18.061 06:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:18.061 06:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.061 06:21:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.321 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:18.321 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:18.582 true 00:10:18.582 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:18.582 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.843 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.843 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:18.843 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:19.103 true 00:10:19.103 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:19.103 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.363 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.363 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:19.363 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:19.623 true 00:10:19.623 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:19.623 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.883 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.883 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:19.883 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:20.145 true 00:10:20.145 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:20.145 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.405 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.405 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:20.406 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:20.665 true 00:10:20.665 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:20.665 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.926 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.186 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:21.186 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:21.186 true 00:10:21.186 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:21.186 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.447 06:21:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.707 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:21.707 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:21.707 true 00:10:21.707 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:21.707 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.967 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.229 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:22.229 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:22.229 true 00:10:22.229 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:22.229 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.489 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.748 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:22.748 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:22.748 true 00:10:22.749 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:22.749 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.009 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.269 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:23.269 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:23.269 true 00:10:23.529 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:23.529 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.529 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.789 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:23.789 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:24.049 true 00:10:24.049 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:24.049 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.049 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.310 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:24.310 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:24.570 true 00:10:24.570 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:24.570 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.570 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.830 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:24.830 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:25.089 true 00:10:25.089 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:25.089 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.350 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.350 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:25.350 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:25.610 true 00:10:25.610 06:21:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:25.610 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.869 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.869 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:25.869 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:26.129 true 00:10:26.129 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:26.129 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.390 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.650 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:26.650 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:26.650 true 00:10:26.650 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:26.650 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.911 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.172 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:27.172 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:27.172 true 00:10:27.172 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:27.172 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.433 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.694 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:27.694 06:21:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:27.694 true 00:10:27.694 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:27.694 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.954 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.216 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:28.216 06:21:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:28.216 true 00:10:28.476 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:28.476 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.476 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.737 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:28.737 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:28.999 true 00:10:28.999 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:28.999 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.999 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.259 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:29.259 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:29.519 true 00:10:29.519 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:29.519 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.780 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.780 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:29.780 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:30.040 true 00:10:30.040 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:30.040 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.300 06:21:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.300 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:30.300 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:30.559 true 00:10:30.559 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:30.559 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.820 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.081 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:31.081 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:31.081 true 00:10:31.081 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:31.081 06:21:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.342 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.602 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:31.602 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:31.602 true 00:10:31.602 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:31.602 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.862 06:21:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.123 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:32.123 06:21:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:32.123 true 00:10:32.123 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:32.123 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.383 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.643 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:32.643 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:32.643 true 00:10:32.903 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:32.903 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.903 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.164 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:33.164 06:21:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:33.164 true 00:10:33.425 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:33.425 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.425 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.686 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:33.686 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:33.946 true 00:10:33.946 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:33.946 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.946 06:21:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.207 06:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:34.207 06:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:34.466 true 00:10:34.467 06:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:34.467 06:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.727 06:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.727 06:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:34.727 06:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:34.988 true 00:10:34.988 06:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:34.988 06:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.249 06:21:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.249 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:35.249 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:35.509 true 00:10:35.509 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:35.509 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.770 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.031 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:36.031 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:36.031 true 00:10:36.031 06:21:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:36.031 06:21:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.291 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.552 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:36.552 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:36.552 true 00:10:36.552 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:36.552 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.811 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.071 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:37.071 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:37.071 true 00:10:37.331 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:37.331 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.331 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.592 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:37.592 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:37.852 true 00:10:37.852 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:37.852 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.852 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.112 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:38.112 06:21:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:38.372 true 00:10:38.372 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:38.372 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.633 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.633 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:38.633 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:38.893 true 00:10:38.893 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:38.893 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.153 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.153 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:39.153 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:39.419 true 00:10:39.419 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:39.419 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.684 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.943 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:39.943 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:39.943 true 00:10:39.943 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:39.943 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.203 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.463 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:40.463 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:40.463 true 00:10:40.463 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:40.463 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.723 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.984 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:40.985 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:41.245 true 00:10:41.245 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838 00:10:41.245 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.245 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.505 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:41.505 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:41.765 true 00:10:41.765 Initializing NVMe Controllers 00:10:41.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.765 Controller IO queue size 128, less than required. 00:10:41.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:41.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:41.765 Initialization complete. Launching workers. 
00:10:41.765 ========================================================
00:10:41.765 Latency(us)
00:10:41.766 Device Information : IOPS MiB/s Average min max
00:10:41.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30717.71 15.00 4166.89 1118.28 11339.23
00:10:41.766 ========================================================
00:10:41.766 Total : 30717.71 15.00 4166.89 1118.28 11339.23
00:10:41.766
00:10:41.766 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2498838
00:10:41.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2498838) - No such process
00:10:41.766 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2498838
00:10:41.766 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:41.766 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:42.026 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:42.026 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:42.026 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:42.026 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:42.026 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:42.286 null0
00:10:42.286 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:42.286 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:42.286 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:42.286 null1
00:10:42.547
06:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:42.807 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:42.807 null4 00:10:43.067 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:43.067 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:43.067 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:43.067 null5 00:10:43.067 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:43.067 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:43.067 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:43.327 null6 00:10:43.327 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:43.327 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:43.327 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:43.589 null7 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
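
The eight interleaved workers that dominate the rest of this trace all run the script's add_remove helper. The SPDK source itself is not part of this log, but the sh@14-@18 markers above pin down its shape; a minimal sketch, with the rpc.py path and subsystem NQN taken verbatim from the trace and the 10-cycle bound inferred from the logged "(( i < 10 ))" checks:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Hot-plug one namespace ten times: attach the given bdev as namespace
# $nsid, then detach it again, as traced at ns_hotplug_stress.sh@14-@18.
add_remove() {
    local nsid=$1 bdev=$2                                     # sh@14
    for ((i = 0; i < 10; i++)); do                            # sh@16
        $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # sh@17
        $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"          # sh@18
    done
}
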
00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
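
Around that helper, the sh@58-@64 markers trace the fan-out: create one null bdev per worker, launch the workers in the background, and collect their PIDs for the "wait" at sh@66 that appears further down. A sketch under the same assumptions ($rpc and add_remove as above); each worker gets its own namespace ID, so the concurrent add/remove storms never collide on a single NSID:

nthreads=8
pids=()

for ((i = 0; i < nthreads; i++)); do           # sh@59-@60: create null0..null7 with
    $rpc bdev_null_create "null$i" 100 4096    # the size args logged above (100, 4096)
done

for ((i = 0; i < nthreads; i++)); do           # sh@62-@64: eight concurrent workers,
    add_remove "$((i + 1))" "null$i" &         # namespace IDs 1-8 over null0-null7
    pids+=($!)
done

wait "${pids[@]}"                              # sh@66: reap all eight workers
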
00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:43.589 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
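
For the first half of this trace (the null_size=1023 ... 1055 run), the sh@44-@50 markers imply a loop that keeps hot-swapping namespace 1 and resizing NULL1 while a background I/O job (PID 2498838, whose summary table appears above) is still alive; once that job exits, kill -0 fails and the loop ends with the "No such process" line. A sketch reconstructed from the trace alone, with the one-unit resize step inferred from the logged sizes:

null_size=1022
while kill -0 2498838; do                      # sh@44: loop while the I/O job lives
    $rpc nvmf_subsystem_remove_ns "$nqn" 1     # sh@45: detach namespace 1
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0   # sh@46: re-attach it backed by Delay0
    ((++null_size))                            # sh@49: 1023, 1024, ... per the trace
    $rpc bdev_null_resize NULL1 "$null_size"   # sh@50: resize NULL1 to the new size
done
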
00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2505395 2505396 2505398 2505400 2505402 2505404 2505406 2505408 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.590 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.852 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.114 06:22:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:44.114 06:22:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:44.114 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:44.114 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.114 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.114 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:44.114 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:44.382 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:44.383 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.383 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.383 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:44.383 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:44.383 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.686 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:45.005 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:45.005 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.006 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:45.280 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.280 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.280 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:45.280 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.280 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.541 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.542 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:45.542 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.542 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:45.542 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:45.542 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:45.803 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:45.804 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:46.064 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.065 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.329 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:46.329 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.329 06:22:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.329 06:22:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.329 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.590 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.852 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.113 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.113 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.113 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:47.114 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:47.114 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:47.114 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:47.114 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:47.374 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:47.374 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:47.374 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:47.374 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:47.374 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:47.374 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:47.374 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
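For context, the interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns churn traced above is ns_hotplug_stress.sh lines 16-18 at work. The following is a reconstruction from the xtrace output only, not the script's verbatim source; the rpc variable, the nsid variable, and the per-namespace parallelism are inferred:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # One short loop per namespace; eight of these appear to run concurrently
  # (nsid 1-8 backed by bdevs null0-null7), which explains why the trace
  # lines for different namespace IDs interleave.
  for (( i = 0; i < 10; ++i )); do                                                         # @16
    "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "null$((nsid - 1))" # @17
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"                     # @18
  done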
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:47.633 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2498425 ']'
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2498425
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2498425 ']'
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2498425
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2498425
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2498425'
killing process with pid 2498425
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2498425
00:10:47.634 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2498425
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
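The killprocess steps traced here (common/autotest_common.sh@952 through @976) reduce to a guarded kill-and-wait. A minimal sketch, assuming only the structure the trace shows; the real helper has more branches than this:

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                            # @952: a PID is required
    kill -0 "$pid" || return 0                           # @956: nothing to do if it already exited
    if [ "$(uname)" = Linux ]; then                      # @957
      process_name=$(ps --no-headers -o comm= "$pid")    # @958: reactor_1 in this run
    fi
    # @962: a sudo wrapper would need different handling; not the case here
    echo "killing process with pid $pid"                 # @970
    kill "$pid"                                          # @971
    wait "$pid"                                          # @976: reap it so ports and shm are freed
  }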
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:47.894 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:49.804 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:49.804
00:10:49.804 real 0m49.118s
00:10:49.804 user 3m19.068s
00:10:49.804 sys 0m17.686s
00:10:49.804 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:49.804 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:49.804 ************************************
00:10:49.804 END TEST nvmf_ns_hotplug_stress
************************************
00:10:49.805 06:22:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:49.805 06:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:10:49.805 06:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:49.805 06:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:50.066 ************************************
00:10:50.066 START TEST nvmf_delete_subsystem
************************************
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:10:50.066 * Looking for test storage...
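The real/user/sys block and the END/START banners just above are produced by the run_test wrapper in common/autotest_common.sh. A sketch of its observable behavior, reconstructed from this log alone, with the wrapper's internal bookkeeping omitted:

  run_test() {
    local test_name=$1
    shift                              # the rest is the test command, e.g. delete_subsystem.sh --transport=tcp
    [ "$#" -le 1 ] || true             # arg-count check seen at @1103
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                          # emits the real/user/sys summary when the test exits
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }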
00:10:50.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:10:50.066 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:10:50.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:50.067 --rc genhtml_branch_coverage=1
00:10:50.067 --rc genhtml_function_coverage=1
00:10:50.067 --rc genhtml_legend=1
00:10:50.067 --rc geninfo_all_blocks=1
00:10:50.067 --rc geninfo_unexecuted_blocks=1
00:10:50.067
00:10:50.067 '
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:10:50.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:50.067 --rc genhtml_branch_coverage=1
00:10:50.067 --rc genhtml_function_coverage=1
00:10:50.067 --rc genhtml_legend=1
00:10:50.067 --rc geninfo_all_blocks=1
00:10:50.067 --rc geninfo_unexecuted_blocks=1
00:10:50.067
00:10:50.067 '
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:10:50.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:50.067 --rc genhtml_branch_coverage=1
00:10:50.067 --rc genhtml_function_coverage=1
00:10:50.067 --rc genhtml_legend=1
00:10:50.067 --rc geninfo_all_blocks=1
00:10:50.067 --rc geninfo_unexecuted_blocks=1
00:10:50.067
00:10:50.067 '
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:10:50.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:50.067 --rc genhtml_branch_coverage=1
00:10:50.067 --rc genhtml_function_coverage=1
00:10:50.067 --rc genhtml_legend=1
00:10:50.067 --rc geninfo_all_blocks=1
00:10:50.067 --rc geninfo_unexecuted_blocks=1
00:10:50.067
00:10:50.067 '
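The cmp_versions walk traced above (scripts/common.sh@333 through @368) compares two dotted versions field by field; lt 1.15 2 succeeds because 1 < 2 already decides it in the first field. A condensed sketch of that flow, under the assumption that this captures only what the trace shows; the real helper also validates each field with decimal and keeps lt/gt/eq counters for the other operators:

  cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"      # @336: "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"      # @337: "2"    -> (2)
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do  # @364
      (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }   # @367
      (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }   # @368: here 1 < 2, so "1.15 < 2" holds
    done
    [[ $op == '=' ]]
  }
  lt() { cmp_versions "$1" '<' "$2"; }  # the lt 1.15 2 call seen at @1691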
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.067 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.068 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.068 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.068 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.068 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.207 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:58.208 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.208 
06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:58.208 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:58.208 Found net devices under 0000:31:00.0: cvl_0_0 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:58.208 Found net devices under 0000:31:00.1: cvl_0_1 
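One wart worth flagging before the TCP setup below: the `[: : integer expression expected` message logged at the start of this test comes from common.sh line 33 evaluating `'[' '' -eq 1 ']'` — the flag variable under test expanded to an empty string, and `[` cannot compare an empty string as an integer. A minimal defensive rewrite, as a sketch (the real variable name is elided in the trace, so SPDK_TEST_FLAG below is hypothetical):

    # Default empty/unset flags to 0 so numeric tests never see ''.
    if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
        echo 'flag enabled'
    fi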
00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:58.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:10:58.208 00:10:58.208 --- 10.0.0.2 ping statistics --- 00:10:58.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.208 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:58.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:10:58.208 00:10:58.208 --- 10.0.0.1 ping statistics --- 00:10:58.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.208 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2510850 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2510850 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2510850 ']' 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:58.208 06:22:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:58.208 06:22:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.208 [2024-11-20 06:22:17.561247] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:10:58.209 [2024-11-20 06:22:17.561315] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.209 [2024-11-20 06:22:17.661399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:58.209 [2024-11-20 06:22:17.711030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.209 [2024-11-20 06:22:17.711086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.209 [2024-11-20 06:22:17.711094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.209 [2024-11-20 06:22:17.711101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.209 [2024-11-20 06:22:17.711107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.209 [2024-11-20 06:22:17.712959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.209 [2024-11-20 06:22:17.713087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.471 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:58.471 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:10:58.471 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:58.471 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:58.471 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.733 [2024-11-20 06:22:18.432576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:58.733 06:22:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.733 [2024-11-20 06:22:18.456888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.733 NULL1 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.733 Delay0 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2510954 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:58.733 06:22:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:58.733 [2024-11-20 06:22:18.583887] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
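At this point the target side is fully assembled and spdk_nvme_perf is driving I/O at 10.0.0.2:4420. Boiled down, the setup the script just walked through rpc_cmd looks roughly like the sketch below (rpc_cmd wraps SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock shown earlier; the unit comments follow SPDK's documented conventions and are informed assumptions, not part of the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport, flags exactly as traced above.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # Subsystem cnode1: -a allows any host, -m caps it at 10 namespaces.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Null backing bdev wrapped in a delay bdev; 1000000 us (~1 s) per I/O
    # keeps the 128-deep perf queue saturated when the subsystem is deleted.
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

Note that the target itself was launched under ip netns exec cvl_0_0_ns_spdk, which is why the listener address 10.0.0.2 lives inside that namespace while perf connects from the host side.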
00:11:00.651 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.651 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.651 06:22:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:00.913 Read completed with error (sct=0, sc=8) 00:11:00.913 Write completed with error (sct=0, sc=8) 00:11:00.913 Read completed with error (sct=0, sc=8) 00:11:00.913 starting I/O failed: -6
[... several hundred further near-identical "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" markers, interleaved with the qpair state errors below, trimmed ...]
00:11:00.913 [2024-11-20 06:22:20.751919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149ef00 is same with the state(6) to be set
00:11:00.914 [2024-11-20 06:22:20.755929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb0d0000c40 is same with the state(6) to be set
00:11:01.856 [2024-11-20 06:22:21.724051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a05e0 is same with the state(6) to be set
00:11:01.856 [2024-11-20 06:22:21.755872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149f4a0 is same with the state(6) to be set
00:11:01.856 [2024-11-20 06:22:21.756029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149f0e0 is same with the state(6) to be set
00:11:01.857 [2024-11-20 06:22:21.757629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb0d000d7e0 is same with the state(6) to be set
00:11:01.857 [2024-11-20 06:22:21.758038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb0d000d020 is same with the state(6) to be set
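Each "completed with error (sct=0, sc=8)" entry above is one queued command that failed back to the initiator when nvmf_delete_subsystem tore the subsystem down mid-run; with Delay0 adding roughly a second of latency per I/O, the 128-deep queue guarantees plenty of in-flight victims, so the flood is expected rather than a test failure. To size the flood from a saved console log, a plain count is enough (the log file name here is hypothetical):

    # Count failed read/write completions in a captured run.
    grep -c 'completed with error (sct=0, sc=8)' nvmf_delete_subsystem.log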
00:11:01.857 Initializing NVMe Controllers 00:11:01.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:01.857 Controller IO queue size 128, less than required. 00:11:01.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:01.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:01.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:01.857 Initialization complete. Launching workers. 00:11:01.857 ======================================================== 00:11:01.857 Latency(us) 00:11:01.857 Device Information : IOPS MiB/s Average min max 00:11:01.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.57 0.08 886623.96 374.29 1008745.96 00:11:01.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 178.54 0.09 943435.21 404.70 2001946.10 00:11:01.857 ======================================================== 00:11:01.857 Total : 352.11 0.17 915430.80 374.29 2001946.10 00:11:01.857 00:11:01.857 [2024-11-20 06:22:21.758587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a05e0 (9): Bad file descriptor 00:11:01.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:01.857 06:22:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.857 06:22:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:01.857 06:22:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2510954 00:11:01.857 06:22:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2510954 00:11:02.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2510954) - No such process 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2510954 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2510954 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2510954 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:02.429 06:22:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.429 [2024-11-20 06:22:22.287773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2511718 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511718 00:11:02.429 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:02.691 [2024-11-20 06:22:22.396410] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
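The `delay=0` / `kill -0` / `sleep 0.5` entries that follow are delete_subsystem.sh's bounded wait for the perf process to exit once its subsystem is gone. Reconstructed from the traced line numbers (56-60), the loop is essentially the sketch below; the exact failure branch is not visible in the trace, so the `break` is a stand-in:

    delay=0
    while kill -0 "$perf_pid"; do      # signal 0 probes liveness without sending anything
        sleep 0.5
        (( delay++ > 20 )) && break    # stand-in: give up after ~10 s of polling
    done

Leaving kill's stderr unredirected is deliberate here — it matches the "kill: (pid) - No such process" lines the real script prints when the loop finally falls through.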
00:11:02.953 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:02.953 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511718 00:11:02.953 06:22:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:03.523 06:22:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:03.523 06:22:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511718 00:11:03.523 06:22:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:04.094 06:22:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:04.094 06:22:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511718 00:11:04.094 06:22:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:04.666 06:22:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:04.666 06:22:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511718 00:11:04.666 06:22:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:04.926 06:22:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:04.926 06:22:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511718 00:11:04.926 06:22:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:05.497 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:05.497 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511718 00:11:05.497 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:05.758 Initializing NVMe Controllers 00:11:05.758 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:05.758 Controller IO queue size 128, less than required. 00:11:05.758 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:05.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:05.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:05.758 Initialization complete. Launching workers. 
00:11:05.758 ======================================================== 00:11:05.758 Latency(us) 00:11:05.758 Device Information : IOPS MiB/s Average min max 00:11:05.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003292.70 1000205.14 1042726.92 00:11:05.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003271.81 1000427.67 1008075.83 00:11:05.758 ======================================================== 00:11:05.758 Total : 256.00 0.12 1003282.26 1000205.14 1042726.92 00:11:05.758 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2511718 00:11:06.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2511718) - No such process 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2511718 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.019 rmmod nvme_tcp 00:11:06.019 rmmod nvme_fabrics 00:11:06.019 rmmod nvme_keyring 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2510850 ']' 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2510850 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2510850 ']' 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2510850 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.019 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2510850 00:11:06.280 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:06.280 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:11:06.280 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2510850' 00:11:06.280 killing process with pid 2510850 00:11:06.280 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2510850 00:11:06.280 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2510850 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.280 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.860 00:11:08.860 real 0m18.438s 00:11:08.860 user 0m30.805s 00:11:08.860 sys 0m6.943s 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:08.860 ************************************ 00:11:08.860 END TEST nvmf_delete_subsystem 00:11:08.860 ************************************ 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:08.860 ************************************ 00:11:08.860 START TEST nvmf_host_management 00:11:08.860 ************************************ 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:08.860 * Looking for test storage... 
00:11:08.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.860 --rc genhtml_branch_coverage=1 00:11:08.860 --rc genhtml_function_coverage=1 00:11:08.860 --rc genhtml_legend=1 00:11:08.860 --rc geninfo_all_blocks=1 00:11:08.860 --rc geninfo_unexecuted_blocks=1 00:11:08.860 00:11:08.860 ' 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.860 --rc genhtml_branch_coverage=1 00:11:08.860 --rc genhtml_function_coverage=1 00:11:08.860 --rc genhtml_legend=1 00:11:08.860 --rc geninfo_all_blocks=1 00:11:08.860 --rc geninfo_unexecuted_blocks=1 00:11:08.860 00:11:08.860 ' 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.860 --rc genhtml_branch_coverage=1 00:11:08.860 --rc genhtml_function_coverage=1 00:11:08.860 --rc genhtml_legend=1 00:11:08.860 --rc geninfo_all_blocks=1 00:11:08.860 --rc geninfo_unexecuted_blocks=1 00:11:08.860 00:11:08.860 ' 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.860 --rc genhtml_branch_coverage=1 00:11:08.860 --rc genhtml_function_coverage=1 00:11:08.860 --rc genhtml_legend=1 00:11:08.860 --rc geninfo_all_blocks=1 00:11:08.860 --rc geninfo_unexecuted_blocks=1 00:11:08.860 00:11:08.860 ' 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.860 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:08.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.861 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.005 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:17.006 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:17.006 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:17.006 Found net devices under 0000:31:00.0: cvl_0_0 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.006 06:22:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:17.006 Found net devices under 0000:31:00.1: cvl_0_1 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.006 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:11:17.006 00:11:17.006 --- 10.0.0.2 ping statistics --- 00:11:17.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.006 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:11:17.006 00:11:17.006 --- 10.0.0.1 ping statistics --- 00:11:17.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.006 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:17.006 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2516789 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2516789 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:17.007 06:22:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2516789 ']' 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:17.007 06:22:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:17.007 [2024-11-20 06:22:36.240895] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:11:17.007 [2024-11-20 06:22:36.240958] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.007 [2024-11-20 06:22:36.338949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.007 [2024-11-20 06:22:36.391944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.007 [2024-11-20 06:22:36.391997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.007 [2024-11-20 06:22:36.392005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.007 [2024-11-20 06:22:36.392012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.007 [2024-11-20 06:22:36.392018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
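Before starting the target, nvmf_tcp_init (nvmf/common.sh@250-291, traced above) moved one E810 port into a private network namespace so target and initiator exercise real NIC ports on a single machine. Condensed to its effect, that sequence is (a minimal sketch; interface names cvl_0_0/cvl_0_1 are the ones detected earlier, and the iptables comment tagging is omitted):

# Target side lives in its own netns with 10.0.0.2; the initiator keeps
# 10.0.0.1 in the root namespace, so traffic crosses the two NIC ports.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port, then verify reachability both ways:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1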
00:11:17.007 [2024-11-20 06:22:36.394466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.007 [2024-11-20 06:22:36.394966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.007 [2024-11-20 06:22:36.395135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.007 [2024-11-20 06:22:36.395136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:17.269 [2024-11-20 06:22:37.116987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.269 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:17.269 Malloc0 00:11:17.532 [2024-11-20 06:22:37.205288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2517061 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2517061 /var/tmp/bdevperf.sock 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2517061 ']' 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:17.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:17.532 { 00:11:17.532 "params": { 00:11:17.532 "name": "Nvme$subsystem", 00:11:17.532 "trtype": "$TEST_TRANSPORT", 00:11:17.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:17.532 "adrfam": "ipv4", 00:11:17.532 "trsvcid": "$NVMF_PORT", 00:11:17.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:17.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:17.532 "hdgst": ${hdgst:-false}, 00:11:17.532 "ddgst": ${ddgst:-false} 00:11:17.532 }, 00:11:17.532 "method": "bdev_nvme_attach_controller" 00:11:17.532 } 00:11:17.532 EOF 00:11:17.532 )") 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:17.532 06:22:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:17.532 "params": { 00:11:17.532 "name": "Nvme0", 00:11:17.532 "trtype": "tcp", 00:11:17.532 "traddr": "10.0.0.2", 00:11:17.532 "adrfam": "ipv4", 00:11:17.532 "trsvcid": "4420", 00:11:17.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:17.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:17.532 "hdgst": false, 00:11:17.532 "ddgst": false 00:11:17.532 }, 00:11:17.532 "method": "bdev_nvme_attach_controller" 00:11:17.532 }' 00:11:17.532 [2024-11-20 06:22:37.316212] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
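The /dev/fd/63 argument in the bdevperf command line above is bash process substitution: the JSON emitted by gen_nvmf_target_json (the params fragment printed above, wrapped in the usual bdev-subsystem config by test/nvmf/common.sh, which the trace does not show) is fed to bdevperf as its config file. Reproduced by hand, the run amounts to (a sketch; assumes gen_nvmf_target_json is in scope from test/nvmf/common.sh):

# 64 outstanding 64 KiB verify I/Os for 10 s against the Nvme0 controller
# attached over NVMe/TCP to 10.0.0.2:4420, per the JSON printed above.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10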
00:11:17.532 [2024-11-20 06:22:37.316284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517061 ] 00:11:17.532 [2024-11-20 06:22:37.411910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.793 [2024-11-20 06:22:37.464928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.055 Running I/O for 10 seconds... 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:18.320 06:22:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.320 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:18.320 [2024-11-20 06:22:38.217106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfe910 is same with the state(6) to be set
[the same tcp.c:1773 tqpair=0xbfe910 message repeats roughly 60 more times, timestamps 06:22:38.217181 through 06:22:38.217626; duplicates elided]
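That burst, and the aborted reads that follow, are the point of this test step: host_management.sh@84 issued nvmf_subsystem_remove_host while bdevperf still had 64 reads queued, so the target tears down the queue pair and completes every outstanding command with an abort status. The equivalent hand-issued RPC would be (a sketch; rpc.py path relative to the spdk checkout):

# Revoke the initiator's host NQN from cnode0 while I/O is in flight; the
# outstanding READs then complete as ABORTED - SQ DELETION, as logged below.
scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0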
00:11:18.321 [2024-11-20 06:22:38.218056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.321 [2024-11-20 06:22:38.218115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same READ / ABORTED - SQ DELETION pair repeats for cid 1 through 28, lba stepping by 128 blocks from 73856 to 77312; duplicates elided]
00:11:18.322 [2024-11-20 06:22:38.218662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218670] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.218986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.218994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.322 [2024-11-20 06:22:38.219290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:18.322 [2024-11-20 06:22:38.219297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.323 [2024-11-20 06:22:38.219306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7ac60 is same with the state(6) to be set 00:11:18.323 [2024-11-20 06:22:38.220629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:18.323 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.323 task offset: 73728 on job bdev=Nvme0n1 fails 00:11:18.323 00:11:18.323 Latency(us) 00:11:18.323 [2024-11-20T05:22:38.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:18.323 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:18.323 Job: Nvme0n1 ended in about 0.42 seconds with error 00:11:18.323 Verification LBA range: start 0x0 length 0x400 00:11:18.323 Nvme0n1 : 0.42 1383.36 86.46 153.71 0.00 40374.38 6062.08 35607.89 00:11:18.323 [2024-11-20T05:22:38.243Z] =================================================================================================================== 00:11:18.323 [2024-11-20T05:22:38.243Z] Total : 1383.36 86.46 153.71 0.00 40374.38 6062.08 35607.89 00:11:18.323 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:18.323 [2024-11-20 06:22:38.222917] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:18.323 [2024-11-20 06:22:38.222960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6a280 (9): Bad file descriptor 00:11:18.323 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.323 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:18.323 [2024-11-20 06:22:38.228646] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow 
host 'nqn.2016-06.io.spdk:host0' 00:11:18.323 [2024-11-20 06:22:38.228758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:11:18.323 [2024-11-20 06:22:38.228797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.323 [2024-11-20 06:22:38.228814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:11:18.323 [2024-11-20 06:22:38.228823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:11:18.323 [2024-11-20 06:22:38.228832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:11:18.323 [2024-11-20 06:22:38.228839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf6a280 00:11:18.323 [2024-11-20 06:22:38.228862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6a280 (9): Bad file descriptor 00:11:18.323 [2024-11-20 06:22:38.228878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:11:18.323 [2024-11-20 06:22:38.228888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:11:18.323 [2024-11-20 06:22:38.228900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:11:18.323 [2024-11-20 06:22:38.228912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
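For context: the CONNECT completion above carries status (01/84), i.e. sct 1, sc 132 = 0x84, which for a Fabrics Connect command indicates the host NQN is not allowed on the subsystem; that matches the nvmf_qpair_access_allowed error, and host_management.sh@85 then opens access with nvmf_subsystem_add_host. A minimal sketch of that step outside the harness, assuming the target listens on the default RPC socket /var/tmp/spdk.sock:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Allow host0 to connect to subsystem cnode0 so the next fabric CONNECT
  # passes the access check instead of failing with (01/84).
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0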
00:11:18.585 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.585 06:22:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:19.528 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2517061 00:11:19.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2517061) - No such process 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:19.529 { 00:11:19.529 "params": { 00:11:19.529 "name": "Nvme$subsystem", 00:11:19.529 "trtype": "$TEST_TRANSPORT", 00:11:19.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:19.529 "adrfam": "ipv4", 00:11:19.529 "trsvcid": "$NVMF_PORT", 00:11:19.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:19.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:19.529 "hdgst": ${hdgst:-false}, 00:11:19.529 "ddgst": ${ddgst:-false} 00:11:19.529 }, 00:11:19.529 "method": "bdev_nvme_attach_controller" 00:11:19.529 } 00:11:19.529 EOF 00:11:19.529 )") 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:19.529 06:22:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:19.529 "params": { 00:11:19.529 "name": "Nvme0", 00:11:19.529 "trtype": "tcp", 00:11:19.529 "traddr": "10.0.0.2", 00:11:19.529 "adrfam": "ipv4", 00:11:19.529 "trsvcid": "4420", 00:11:19.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:19.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:19.529 "hdgst": false, 00:11:19.529 "ddgst": false 00:11:19.529 }, 00:11:19.529 "method": "bdev_nvme_attach_controller" 00:11:19.529 }' 00:11:19.529 [2024-11-20 06:22:39.295525] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
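For context: the heredoc above is the attach-controller config that gen_nvmf_target_json emits, and bdevperf reads the resolved JSON on file descriptor 62 via --json /dev/fd/62. A hedged equivalent outside the harness, using process substitution instead of an explicit fd (assumes test/nvmf/common.sh has been sourced so gen_nvmf_target_json is defined):

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  # -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: verification
  # workload, -t 1: run for one second against the NVMe-oF bdev.
  $bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1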
00:11:19.529 [2024-11-20 06:22:39.295580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517418 ] 00:11:19.529 [2024-11-20 06:22:39.384898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.529 [2024-11-20 06:22:39.420853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.100 Running I/O for 1 seconds... 00:11:21.040 1600.00 IOPS, 100.00 MiB/s 00:11:21.040 Latency(us) 00:11:21.040 [2024-11-20T05:22:40.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.040 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:21.040 Verification LBA range: start 0x0 length 0x400 00:11:21.040 Nvme0n1 : 1.02 1635.02 102.19 0.00 0.00 38454.85 6307.84 31457.28 00:11:21.040 [2024-11-20T05:22:40.960Z] =================================================================================================================== 00:11:21.040 [2024-11-20T05:22:40.960Z] Total : 1635.02 102.19 0.00 0.00 38454.85 6307.84 31457.28 00:11:21.040 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:21.040 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:21.040 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:21.040 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:21.040 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:21.040 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.040 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:11:21.040 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.040 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:11:21.040 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.040 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.040 rmmod nvme_tcp 00:11:21.040 rmmod nvme_fabrics 00:11:21.301 rmmod nvme_keyring 00:11:21.301 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.301 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:11:21.301 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:11:21.301 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2516789 ']' 00:11:21.301 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2516789 00:11:21.301 06:22:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2516789 ']' 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2516789 00:11:21.301 06:22:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2516789 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2516789' 00:11:21.301 killing process with pid 2516789 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2516789 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2516789 00:11:21.301 [2024-11-20 06:22:41.163530] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.301 06:22:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:23.845 00:11:23.845 real 0m15.026s 00:11:23.845 user 0m24.035s 00:11:23.845 sys 0m6.922s 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:23.845 ************************************ 00:11:23.845 END TEST nvmf_host_management 00:11:23.845 ************************************ 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.845 ************************************ 00:11:23.845 START TEST nvmf_lvol 00:11:23.845 ************************************ 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:23.845 * Looking for test storage... 00:11:23.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:23.845 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:23.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.846 --rc genhtml_branch_coverage=1 00:11:23.846 --rc genhtml_function_coverage=1 00:11:23.846 --rc genhtml_legend=1 00:11:23.846 --rc geninfo_all_blocks=1 00:11:23.846 --rc geninfo_unexecuted_blocks=1 00:11:23.846 00:11:23.846 ' 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:23.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.846 --rc genhtml_branch_coverage=1 00:11:23.846 --rc genhtml_function_coverage=1 00:11:23.846 --rc genhtml_legend=1 00:11:23.846 --rc geninfo_all_blocks=1 00:11:23.846 --rc geninfo_unexecuted_blocks=1 00:11:23.846 00:11:23.846 ' 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:23.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.846 --rc genhtml_branch_coverage=1 00:11:23.846 --rc genhtml_function_coverage=1 00:11:23.846 --rc genhtml_legend=1 00:11:23.846 --rc geninfo_all_blocks=1 00:11:23.846 --rc geninfo_unexecuted_blocks=1 00:11:23.846 00:11:23.846 ' 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:23.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.846 --rc genhtml_branch_coverage=1 00:11:23.846 --rc genhtml_function_coverage=1 00:11:23.846 --rc genhtml_legend=1 00:11:23.846 --rc geninfo_all_blocks=1 00:11:23.846 --rc geninfo_unexecuted_blocks=1 00:11:23.846 00:11:23.846 ' 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:11:23.846 06:22:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:31.991 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:31.991 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.991 06:22:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:31.991 Found net devices under 0000:31:00.0: cvl_0_0 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:31.991 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:31.992 Found net devices under 0000:31:00.1: cvl_0_1 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.992 06:22:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:31.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:11:31.992 00:11:31.992 --- 10.0.0.2 ping statistics --- 00:11:31.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.992 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:11:31.992 00:11:31.992 --- 10.0.0.1 ping statistics --- 00:11:31.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.992 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2522131 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2522131 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2522131 ']' 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:31.992 06:22:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:31.992 [2024-11-20 06:22:51.296305] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
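At this point nvmf_tcp_init has finished wiring the test topology: the two E810 ports (cvl_0_0 and cvl_0_1) are split across a network namespace so that target and initiator traffic crosses a real NIC pair instead of loopback. Condensed from the trace above, the setup amounts to the following sketch; the device names and the 10.0.0.0/24 addressing are specific to this rig, and the iptables comment tagging is elided:

  # target side lives in its own namespace, initiator stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port, then verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every later NVMF_TARGET_NS_CMD invocation in this log is shorthand for running the command through ip netns exec cvl_0_0_ns_spdk.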
00:11:31.992 [2024-11-20 06:22:51.296369] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.992 [2024-11-20 06:22:51.396807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:31.992 [2024-11-20 06:22:51.449640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.992 [2024-11-20 06:22:51.449693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.992 [2024-11-20 06:22:51.449702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.992 [2024-11-20 06:22:51.449709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.992 [2024-11-20 06:22:51.449715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.992 [2024-11-20 06:22:51.451798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.992 [2024-11-20 06:22:51.451926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.992 [2024-11-20 06:22:51.451927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.254 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:32.254 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:11:32.254 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.254 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:32.254 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:32.254 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.254 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:32.516 [2024-11-20 06:22:52.325475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.516 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:32.777 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:32.777 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:33.038 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:33.039 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:33.300 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:33.561 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1f012976-0e25-4b00-9d7e-9830cb157481 00:11:33.561 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1f012976-0e25-4b00-9d7e-9830cb157481 lvol 20 00:11:33.561 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=78e54b63-2a37-4937-92fb-a6c644622136 00:11:33.561 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:33.822 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 78e54b63-2a37-4937-92fb-a6c644622136 00:11:34.083 06:22:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:34.083 [2024-11-20 06:22:53.992311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.343 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:34.343 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2522832 00:11:34.343 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:34.343 06:22:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:35.727 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 78e54b63-2a37-4937-92fb-a6c644622136 MY_SNAPSHOT 00:11:35.727 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=347bcb73-f13d-46e0-81cc-ce0f37c52ac2 00:11:35.727 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 78e54b63-2a37-4937-92fb-a6c644622136 30 00:11:35.728 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 347bcb73-f13d-46e0-81cc-ce0f37c52ac2 MY_CLONE 00:11:35.989 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1eca0bac-df1f-4928-b97d-f3425aadccef 00:11:35.989 06:22:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1eca0bac-df1f-4928-b97d-f3425aadccef 00:11:36.559 06:22:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2522832 00:11:46.566 Initializing NVMe Controllers 00:11:46.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:46.566 Controller IO queue size 128, less than required. 00:11:46.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:11:46.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:46.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:46.566 Initialization complete. Launching workers. 00:11:46.566 ======================================================== 00:11:46.566 Latency(us) 00:11:46.566 Device Information : IOPS MiB/s Average min max 00:11:46.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16334.00 63.80 7839.03 1483.84 52332.41 00:11:46.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17215.70 67.25 7437.05 626.12 43580.34 00:11:46.566 ======================================================== 00:11:46.566 Total : 33549.70 131.05 7632.75 626.12 52332.41 00:11:46.566 00:11:46.566 06:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:46.566 06:23:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 78e54b63-2a37-4937-92fb-a6c644622136 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1f012976-0e25-4b00-9d7e-9830cb157481 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:46.566 rmmod nvme_tcp 00:11:46.566 rmmod nvme_fabrics 00:11:46.566 rmmod nvme_keyring 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2522131 ']' 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2522131 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2522131 ']' 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2522131 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2522131 00:11:46.566 06:23:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2522131' 00:11:46.566 killing process with pid 2522131 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2522131 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2522131 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:46.566 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:11:46.567 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:46.567 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:11:46.567 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:46.567 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:46.567 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.567 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.567 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.008 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:48.008 00:11:48.008 real 0m24.332s 00:11:48.008 user 1m5.796s 00:11:48.008 sys 0m8.752s 00:11:48.008 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:48.008 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:48.008 ************************************ 00:11:48.008 END TEST nvmf_lvol 00:11:48.008 ************************************ 00:11:48.008 06:23:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:48.008 06:23:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:48.008 06:23:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:48.008 06:23:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:48.008 ************************************ 00:11:48.008 START TEST nvmf_lvs_grow 00:11:48.008 ************************************ 00:11:48.008 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:48.008 * Looking for test storage... 
00:11:48.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.008 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:48.008 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:11:48.008 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.306 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:48.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.307 --rc genhtml_branch_coverage=1 00:11:48.307 --rc genhtml_function_coverage=1 00:11:48.307 --rc genhtml_legend=1 00:11:48.307 --rc geninfo_all_blocks=1 00:11:48.307 --rc geninfo_unexecuted_blocks=1 00:11:48.307 00:11:48.307 ' 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:48.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.307 --rc genhtml_branch_coverage=1 00:11:48.307 --rc genhtml_function_coverage=1 00:11:48.307 --rc genhtml_legend=1 00:11:48.307 --rc geninfo_all_blocks=1 00:11:48.307 --rc geninfo_unexecuted_blocks=1 00:11:48.307 00:11:48.307 ' 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:48.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.307 --rc genhtml_branch_coverage=1 00:11:48.307 --rc genhtml_function_coverage=1 00:11:48.307 --rc genhtml_legend=1 00:11:48.307 --rc geninfo_all_blocks=1 00:11:48.307 --rc geninfo_unexecuted_blocks=1 00:11:48.307 00:11:48.307 ' 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:48.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.307 --rc genhtml_branch_coverage=1 00:11:48.307 --rc genhtml_function_coverage=1 00:11:48.307 --rc genhtml_legend=1 00:11:48.307 --rc geninfo_all_blocks=1 00:11:48.307 --rc geninfo_unexecuted_blocks=1 00:11:48.307 00:11:48.307 ' 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:48.307 06:23:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.307 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.307 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.307 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.307 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.307 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:56.450 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:56.450 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.450 06:23:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:56.450 Found net devices under 0000:31:00.0: cvl_0_0 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:56.450 Found net devices under 0000:31:00.1: cvl_0_1 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.450 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:56.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:11:56.451 00:11:56.451 --- 10.0.0.2 ping statistics --- 00:11:56.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.451 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:56.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:11:56.451 00:11:56.451 --- 10.0.0.1 ping statistics --- 00:11:56.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.451 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2529248 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2529248 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2529248 ']' 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:56.451 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:56.451 [2024-11-20 06:23:15.608321] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
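The lvs_grow suite then brings up its own target inside that same namespace. Note the core mask is 0x1 here, a single reactor, where the lvol suite above used 0x7; everything else follows the same pattern. A minimal reproduction of the launch recorded in this trace, with the repository path abbreviated and the pid and socket values varying per run:

  # start nvmf_tgt in the target namespace; the harness waits for /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # once the RPC socket is up, create the TCP transport with the options used throughout this log
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192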
00:11:56.451 [2024-11-20 06:23:15.608386] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.451 [2024-11-20 06:23:15.705423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.451 [2024-11-20 06:23:15.759021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.451 [2024-11-20 06:23:15.759070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.451 [2024-11-20 06:23:15.759079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.451 [2024-11-20 06:23:15.759086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.451 [2024-11-20 06:23:15.759093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.451 [2024-11-20 06:23:15.759682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.712 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:56.712 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:11:56.712 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.712 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:56.712 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:56.712 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.712 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:56.712 [2024-11-20 06:23:16.627838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:56.973 ************************************ 00:11:56.973 START TEST lvs_grow_clean 00:11:56.973 ************************************ 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:56.973 06:23:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:56.973 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:57.233 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:57.233 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:57.233 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=58ac547c-465c-4c3d-9583-ae11a5fbbf32 00:11:57.233 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32 00:11:57.233 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:57.494 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:57.494 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:57.494 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32 lvol 150 00:11:57.755 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=23fff5b5-5eac-47ad-a4af-dea1d372d7f4 00:11:57.755 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:57.755 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:57.755 [2024-11-20 06:23:17.615517] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:57.755 [2024-11-20 06:23:17.615590] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:57.755 true 00:11:57.755 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:57.755 06:23:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32 00:11:58.015 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:58.015 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:58.276 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 23fff5b5-5eac-47ad-a4af-dea1d372d7f4 00:11:58.276 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:58.536 [2024-11-20 06:23:18.321835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.536 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:58.797 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2529959 00:11:58.797 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:58.797 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:58.797 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2529959 /var/tmp/bdevperf.sock 00:11:58.797 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2529959 ']' 00:11:58.797 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:58.797 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:58.797 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:58.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:58.797 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:58.797 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:58.797 [2024-11-20 06:23:18.573438] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
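bdevperf is started with -z, so it idles until driven over its private RPC socket; that is what lets the test attach the NVMe-oF namespace, start I/O, and grow the logical volume store underneath the running workload. In outline, the flow captured around this point is as follows (socket path, NQN, and lvstore UUID taken from this run's log; paths abbreviated):

  # bdevperf waits for RPC on its own socket
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # attach the exported lvol as bdev Nvme0n1, then kick off the workload
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  # setup already doubled the backing file (truncate -s 400M) and ran bdev_aio_rescan;
  # with I/O in flight, claim the new capacity and expect total_data_clusters to go 49 -> 99
  ./scripts/rpc.py bdev_lvol_grow_lvstore -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32
  ./scripts/rpc.py bdev_lvol_get_lvstores -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32 | jq -r '.[0].total_data_clusters'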
00:11:58.797 [2024-11-20 06:23:18.573508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2529959 ]
00:11:58.797 [2024-11-20 06:23:18.665830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:59.059 [2024-11-20 06:23:18.718196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:59.630 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:11:59.630 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0
00:11:59.630 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:11:59.891 Nvme0n1
00:11:59.891 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:12:00.153 [
00:12:00.153 {
00:12:00.153 "name": "Nvme0n1",
00:12:00.153 "aliases": [
00:12:00.153 "23fff5b5-5eac-47ad-a4af-dea1d372d7f4"
00:12:00.153 ],
00:12:00.153 "product_name": "NVMe disk",
00:12:00.153 "block_size": 4096,
00:12:00.153 "num_blocks": 38912,
00:12:00.153 "uuid": "23fff5b5-5eac-47ad-a4af-dea1d372d7f4",
00:12:00.153 "numa_id": 0,
00:12:00.153 "assigned_rate_limits": {
00:12:00.153 "rw_ios_per_sec": 0,
00:12:00.153 "rw_mbytes_per_sec": 0,
00:12:00.153 "r_mbytes_per_sec": 0,
00:12:00.153 "w_mbytes_per_sec": 0
00:12:00.153 },
00:12:00.153 "claimed": false,
00:12:00.153 "zoned": false,
00:12:00.153 "supported_io_types": {
00:12:00.153 "read": true,
00:12:00.153 "write": true,
00:12:00.153 "unmap": true,
00:12:00.153 "flush": true,
00:12:00.153 "reset": true,
00:12:00.153 "nvme_admin": true,
00:12:00.153 "nvme_io": true,
00:12:00.153 "nvme_io_md": false,
00:12:00.153 "write_zeroes": true,
00:12:00.153 "zcopy": false,
00:12:00.153 "get_zone_info": false,
00:12:00.153 "zone_management": false,
00:12:00.153 "zone_append": false,
00:12:00.153 "compare": true,
00:12:00.153 "compare_and_write": true,
00:12:00.153 "abort": true,
00:12:00.153 "seek_hole": false,
00:12:00.153 "seek_data": false,
00:12:00.153 "copy": true,
00:12:00.153 "nvme_iov_md": false
00:12:00.153 },
00:12:00.153 "memory_domains": [
00:12:00.153 {
00:12:00.153 "dma_device_id": "system",
00:12:00.153 "dma_device_type": 1
00:12:00.153 }
00:12:00.153 ],
00:12:00.153 "driver_specific": {
00:12:00.153 "nvme": [
00:12:00.153 {
00:12:00.153 "trid": {
00:12:00.153 "trtype": "TCP",
00:12:00.153 "adrfam": "IPv4",
00:12:00.153 "traddr": "10.0.0.2",
00:12:00.153 "trsvcid": "4420",
00:12:00.153 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:12:00.153 },
00:12:00.153 "ctrlr_data": {
00:12:00.153 "cntlid": 1,
00:12:00.153 "vendor_id": "0x8086",
00:12:00.153 "model_number": "SPDK bdev Controller",
00:12:00.153 "serial_number": "SPDK0",
00:12:00.153 "firmware_revision": "25.01",
00:12:00.153 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:12:00.153 "oacs": {
00:12:00.153 "security": 0,
00:12:00.153 "format": 0,
00:12:00.153 "firmware": 0,
00:12:00.153 "ns_manage": 0
00:12:00.153 },
00:12:00.153 "multi_ctrlr": true,
00:12:00.153 "ana_reporting": false
00:12:00.153 },
00:12:00.153 "vs": {
00:12:00.153 "nvme_version": "1.3"
00:12:00.153 },
00:12:00.153 "ns_data": {
00:12:00.153 "id": 1,
00:12:00.153 "can_share": true
00:12:00.153 }
00:12:00.153 }
00:12:00.153 ],
00:12:00.153 "mp_policy": "active_passive"
00:12:00.153 }
00:12:00.153 }
00:12:00.153 ]
00:12:00.153 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:12:00.153 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2530242
00:12:00.153 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:12:00.153 Running I/O for 10 seconds...
00:12:01.096 Latency(us)
[2024-11-20T05:23:21.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:01.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:01.096 Nvme0n1 : 1.00 25051.00 97.86 0.00 0.00 0.00 0.00 0.00
00:12:01.096 [2024-11-20T05:23:21.016Z] ===================================================================================================================
00:12:01.096 [2024-11-20T05:23:21.016Z] Total : 25051.00 97.86 0.00 0.00 0.00 0.00 0.00
00:12:01.096
00:12:02.037 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32
00:12:02.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:02.037 Nvme0n1 : 2.00 25227.50 98.54 0.00 0.00 0.00 0.00 0.00
00:12:02.037 [2024-11-20T05:23:21.957Z] ===================================================================================================================
00:12:02.037 [2024-11-20T05:23:21.957Z] Total : 25227.50 98.54 0.00 0.00 0.00 0.00 0.00
00:12:02.037
00:12:02.297 true
00:12:02.297 06:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32
00:12:02.297 06:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:12:02.558 06:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:12:02.558 06:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:12:02.558 06:23:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2530242
00:12:03.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:03.130 Nvme0n1 : 3.00 25312.67 98.88 0.00 0.00 0.00 0.00 0.00
00:12:03.130 [2024-11-20T05:23:23.050Z] ===================================================================================================================
00:12:03.130 [2024-11-20T05:23:23.050Z] Total : 25312.67 98.88 0.00 0.00 0.00 0.00 0.00
00:12:03.130
00:12:04.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:04.071 Nvme0n1 : 4.00 25368.00 99.09 0.00 0.00 0.00 0.00 0.00
00:12:04.071 [2024-11-20T05:23:23.991Z] ===================================================================================================================
00:12:04.071 [2024-11-20T05:23:23.991Z] Total : 25368.00 99.09 0.00 0.00 0.00 0.00 0.00
00:12:04.071
00:12:05.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:05.012 Nvme0n1 : 5.00 25414.00 99.27 0.00 0.00 0.00 0.00 0.00
00:12:05.012 [2024-11-20T05:23:24.932Z] ===================================================================================================================
00:12:05.012 [2024-11-20T05:23:24.932Z] Total : 25414.00 99.27 0.00 0.00 0.00 0.00 0.00
00:12:05.012
00:12:06.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:06.398 Nvme0n1 : 6.00 25445.00 99.39 0.00 0.00 0.00 0.00 0.00
00:12:06.398 [2024-11-20T05:23:26.318Z] ===================================================================================================================
00:12:06.398 [2024-11-20T05:23:26.318Z] Total : 25445.00 99.39 0.00 0.00 0.00 0.00 0.00
00:12:06.398
00:12:07.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:07.341 Nvme0n1 : 7.00 25467.00 99.48 0.00 0.00 0.00 0.00 0.00
00:12:07.341 [2024-11-20T05:23:27.261Z] ===================================================================================================================
00:12:07.341 [2024-11-20T05:23:27.261Z] Total : 25467.00 99.48 0.00 0.00 0.00 0.00 0.00
00:12:07.341
00:12:08.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:08.283 Nvme0n1 : 8.00 25491.38 99.58 0.00 0.00 0.00 0.00 0.00
00:12:08.283 [2024-11-20T05:23:28.203Z] ===================================================================================================================
00:12:08.283 [2024-11-20T05:23:28.203Z] Total : 25491.38 99.58 0.00 0.00 0.00 0.00 0.00
00:12:08.283
00:12:09.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:09.225 Nvme0n1 : 9.00 25503.44 99.62 0.00 0.00 0.00 0.00 0.00
00:12:09.225 [2024-11-20T05:23:29.145Z] ===================================================================================================================
00:12:09.225 [2024-11-20T05:23:29.145Z] Total : 25503.44 99.62 0.00 0.00 0.00 0.00 0.00
00:12:09.225
00:12:10.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:10.196 Nvme0n1 : 10.00 25519.40 99.69 0.00 0.00 0.00 0.00 0.00
00:12:10.196 [2024-11-20T05:23:30.116Z] ===================================================================================================================
00:12:10.196 [2024-11-20T05:23:30.116Z] Total : 25519.40 99.69 0.00 0.00 0.00 0.00 0.00
00:12:10.196
00:12:10.196
00:12:10.196 Latency(us)
00:12:10.196 [2024-11-20T05:23:30.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:10.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:10.196 Nvme0n1 : 10.00 25518.14 99.68 0.00 0.00 5012.40 2498.56 12615.68
00:12:10.196 [2024-11-20T05:23:30.116Z] ===================================================================================================================
00:12:10.196 [2024-11-20T05:23:30.116Z] Total : 25518.14 99.68 0.00 0.00 5012.40 2498.56 12615.68
00:12:10.196 {
00:12:10.196 "results": [
00:12:10.196 {
00:12:10.196 "job": "Nvme0n1",
00:12:10.196 "core_mask": "0x2",
00:12:10.196 "workload": "randwrite",
00:12:10.196 "status": "finished",
00:12:10.196 "queue_depth": 128,
00:12:10.196 "io_size": 4096,
00:12:10.196 "runtime": 10.002963,
00:12:10.196 "iops": 25518.13897542158,
00:12:10.196 "mibps": 99.68023037274055,
00:12:10.196 "io_failed": 0,
00:12:10.196 "io_timeout": 0,
00:12:10.196 "avg_latency_us": 5012.3986639347795,
00:12:10.196 "min_latency_us": 2498.56,
00:12:10.196 "max_latency_us": 12615.68
00:12:10.196 }
00:12:10.196 ],
00:12:10.196 "core_count": 1
00:12:10.196 }
00:12:10.196 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2529959
00:12:10.196 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2529959 ']'
00:12:10.196 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2529959
00:12:10.196 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname
00:12:10.196 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:12:10.196 06:23:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2529959
00:12:10.196 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:12:10.196 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:12:10.196 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2529959'
killing process with pid 2529959
00:12:10.196 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2529959
00:12:10.196 Received shutdown signal, test time was about 10.000000 seconds
00:12:10.196
00:12:10.196 Latency(us)
00:12:10.196 [2024-11-20T05:23:30.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:10.196 [2024-11-20T05:23:30.116Z] ===================================================================================================================
00:12:10.196 [2024-11-20T05:23:30.116Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:12:10.196 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2529959
00:12:10.457 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:10.457 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:12:10.717 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32
00:12:10.717 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:12:10.978 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:12:10.978 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:12:10.978 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:10.978 [2024-11-20 06:23:30.808293] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:12:10.978 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32
00:12:10.978 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0
00:12:10.978 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32
00:12:10.978 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:10.979 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:10.979 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:10.979 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:10.979 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:10.979 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:10.979 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:10.979 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:12:10.979 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32
00:12:11.239 request:
00:12:11.239 {
00:12:11.239 "uuid": "58ac547c-465c-4c3d-9583-ae11a5fbbf32",
00:12:11.239 "method": "bdev_lvol_get_lvstores",
00:12:11.239 "req_id": 1
00:12:11.239 }
00:12:11.239 Got JSON-RPC error response
00:12:11.239 response:
00:12:11.239 {
00:12:11.239 "code": -19,
00:12:11.239 "message": "No such device"
00:12:11.239 }
00:12:11.239 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1
00:12:11.239 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:11.239 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:11.239 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:11.239 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:11.499 aio_bdev
00:12:11.499 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 23fff5b5-5eac-47ad-a4af-dea1d372d7f4
00:12:11.499 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=23fff5b5-5eac-47ad-a4af-dea1d372d7f4
00:12:11.499 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:11.499 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i
00:12:11.499 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:11.499 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:11.499 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:12:11.499 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 23fff5b5-5eac-47ad-a4af-dea1d372d7f4 -t 2000
00:12:11.759 [
00:12:11.759 {
00:12:11.759 "name": "23fff5b5-5eac-47ad-a4af-dea1d372d7f4",
00:12:11.759 "aliases": [
00:12:11.759 "lvs/lvol"
00:12:11.759 ],
00:12:11.759 "product_name": "Logical Volume",
00:12:11.759 "block_size": 4096,
00:12:11.759 "num_blocks": 38912,
00:12:11.759 "uuid": "23fff5b5-5eac-47ad-a4af-dea1d372d7f4",
00:12:11.759 "assigned_rate_limits": {
00:12:11.759 "rw_ios_per_sec": 0,
00:12:11.759 "rw_mbytes_per_sec": 0,
00:12:11.759 "r_mbytes_per_sec": 0,
00:12:11.759 "w_mbytes_per_sec": 0
00:12:11.759 },
00:12:11.759 "claimed": false,
00:12:11.759 "zoned": false,
00:12:11.759 "supported_io_types": {
00:12:11.759 "read": true,
00:12:11.759 "write": true,
00:12:11.759 "unmap": true,
00:12:11.759 "flush": false,
00:12:11.759 "reset": true,
00:12:11.759 "nvme_admin": false,
00:12:11.759 "nvme_io": false,
00:12:11.759 "nvme_io_md": false,
00:12:11.759 "write_zeroes": true,
00:12:11.759 "zcopy": false,
00:12:11.759 "get_zone_info": false,
00:12:11.759 "zone_management": false,
00:12:11.759 "zone_append": false,
00:12:11.759 "compare": false,
00:12:11.759 "compare_and_write": false,
00:12:11.759 "abort": false,
00:12:11.759 "seek_hole": true,
00:12:11.759 "seek_data": true,
00:12:11.759 "copy": false,
00:12:11.759 "nvme_iov_md": false
00:12:11.759 },
00:12:11.759 "driver_specific": {
00:12:11.759 "lvol": {
00:12:11.759 "lvol_store_uuid": "58ac547c-465c-4c3d-9583-ae11a5fbbf32",
00:12:11.759 "base_bdev": "aio_bdev",
00:12:11.759 "thin_provision": false,
00:12:11.759 "num_allocated_clusters": 38,
00:12:11.759 "snapshot": false,
00:12:11.759 "clone": false,
00:12:11.759 "esnap_clone": false
00:12:11.759 }
00:12:11.759 }
00:12:11.759 }
00:12:11.759 ]
00:12:11.759 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0
00:12:11.759 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32
00:12:11.759 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:12:12.020 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:12:12.020 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32
00:12:12.020 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:12:12.020 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:12:12.020 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 23fff5b5-5eac-47ad-a4af-dea1d372d7f4
00:12:12.281 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 58ac547c-465c-4c3d-9583-ae11a5fbbf32
00:12:12.541 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:12.541 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:12.541
00:12:12.541 real 0m15.719s
00:12:12.541 user 0m15.431s
00:12:12.541 sys 0m1.436s
00:12:12.541 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:12.541 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:12:12.541 ************************************
00:12:12.541 END TEST lvs_grow_clean
00:12:12.541 ************************************
00:12:12.541 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:12:12.541 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:12:12.541 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable
00:12:12.541 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:12:12.802 ************************************
00:12:12.802 START TEST lvs_grow_dirty
00:12:12.802 ************************************
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:12:12.802 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:12:13.064 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:13.064 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:13.064 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:12:13.324 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:12:13.324 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:12:13.324 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fcc7a038-a7b7-432b-a5a5-28248559984e lvol 150
00:12:13.324 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d5bce152-a215-411f-a125-3d72662be045
00:12:13.324 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:13.324 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:12:13.585 [2024-11-20 06:23:33.332788] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:12:13.585 [2024-11-20 06:23:33.332828] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
true
00:12:13.585 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:13.585 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:12:13.847 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:12:13.847 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:12:13.847 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d5bce152-a215-411f-a125-3d72662be045
00:12:14.108 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:12:14.108 [2024-11-20 06:23:34.022773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:14.369 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:14.369 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2533052
00:12:14.369 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:12:14.369 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:12:14.369 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2533052 /var/tmp/bdevperf.sock
00:12:14.369 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2533052 ']'
00:12:14.369 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:12:14.369 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100
00:12:14.369 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:12:14.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
06:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable
00:12:14.369 06:23:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:12:14.369 [2024-11-20 06:23:34.256779] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
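As in the clean case, the load generator is the bdevperf example app, started with -z so that it idles until told to run and controlled entirely over its private RPC socket. A sketch of that control sequence, using the same socket path, flags, and NQN as the trace (paths are relative to an SPDK source tree):

    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Here -o 4096 -q 128 -w randwrite -t 10 request a 4 KiB random-write workload at queue depth 128 for ten seconds, and -S 1 is what produces the once-per-second status tables in this trace. The only intended difference from lvs_grow_clean is what happens mid-run: bdev_lvol_grow_lvstore lifts total_data_clusters from 49 to 99 while this I/O is in flight.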
00:12:14.369 [2024-11-20 06:23:34.256830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2533052 ]
00:12:14.630 [2024-11-20 06:23:34.338705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:14.630 [2024-11-20 06:23:34.368262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:15.201 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:15.202 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0
00:12:15.202 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:12:15.774 Nvme0n1
00:12:15.774 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:12:15.774 [
00:12:15.774 {
00:12:15.774 "name": "Nvme0n1",
00:12:15.774 "aliases": [
00:12:15.774 "d5bce152-a215-411f-a125-3d72662be045"
00:12:15.774 ],
00:12:15.774 "product_name": "NVMe disk",
00:12:15.774 "block_size": 4096,
00:12:15.774 "num_blocks": 38912,
00:12:15.774 "uuid": "d5bce152-a215-411f-a125-3d72662be045",
00:12:15.774 "numa_id": 0,
00:12:15.774 "assigned_rate_limits": {
00:12:15.774 "rw_ios_per_sec": 0,
00:12:15.774 "rw_mbytes_per_sec": 0,
00:12:15.774 "r_mbytes_per_sec": 0,
00:12:15.774 "w_mbytes_per_sec": 0
00:12:15.774 },
00:12:15.774 "claimed": false,
00:12:15.774 "zoned": false,
00:12:15.774 "supported_io_types": {
00:12:15.774 "read": true,
00:12:15.774 "write": true,
00:12:15.774 "unmap": true,
00:12:15.774 "flush": true,
00:12:15.774 "reset": true,
00:12:15.774 "nvme_admin": true,
00:12:15.774 "nvme_io": true,
00:12:15.774 "nvme_io_md": false,
00:12:15.774 "write_zeroes": true,
00:12:15.774 "zcopy": false,
00:12:15.774 "get_zone_info": false,
00:12:15.774 "zone_management": false,
00:12:15.774 "zone_append": false,
00:12:15.774 "compare": true,
00:12:15.774 "compare_and_write": true,
00:12:15.774 "abort": true,
00:12:15.774 "seek_hole": false,
00:12:15.774 "seek_data": false,
00:12:15.774 "copy": true,
00:12:15.774 "nvme_iov_md": false
00:12:15.774 },
00:12:15.774 "memory_domains": [
00:12:15.774 {
00:12:15.774 "dma_device_id": "system",
00:12:15.774 "dma_device_type": 1
00:12:15.774 }
00:12:15.774 ],
00:12:15.774 "driver_specific": {
00:12:15.774 "nvme": [
00:12:15.774 {
00:12:15.774 "trid": {
00:12:15.774 "trtype": "TCP",
00:12:15.774 "adrfam": "IPv4",
00:12:15.774 "traddr": "10.0.0.2",
00:12:15.774 "trsvcid": "4420",
00:12:15.774 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:12:15.774 },
00:12:15.774 "ctrlr_data": {
00:12:15.774 "cntlid": 1,
00:12:15.774 "vendor_id": "0x8086",
00:12:15.774 "model_number": "SPDK bdev Controller",
00:12:15.774 "serial_number": "SPDK0",
00:12:15.774 "firmware_revision": "25.01",
00:12:15.774 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:12:15.774 "oacs": {
00:12:15.774 "security": 0,
00:12:15.774 "format": 0,
00:12:15.774 "firmware": 0,
00:12:15.774 "ns_manage": 0
00:12:15.774 },
00:12:15.774 "multi_ctrlr": true,
00:12:15.774 "ana_reporting": false
00:12:15.774 },
00:12:15.774 "vs": {
00:12:15.774 "nvme_version": "1.3"
00:12:15.774 },
00:12:15.774 "ns_data": {
00:12:15.774 "id": 1,
00:12:15.774 "can_share": true
00:12:15.774 }
00:12:15.774 }
00:12:15.774 ],
00:12:15.774 "mp_policy": "active_passive"
00:12:15.774 }
00:12:15.774 }
00:12:15.774 ]
00:12:15.774 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2533396
00:12:15.774 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:12:15.774 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:12:16.035 Running I/O for 10 seconds...
00:12:16.978 Latency(us)
[2024-11-20T05:23:36.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:16.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:16.978 Nvme0n1 : 1.00 25174.00 98.34 0.00 0.00 0.00 0.00 0.00
00:12:16.978 [2024-11-20T05:23:36.898Z] ===================================================================================================================
00:12:16.978 [2024-11-20T05:23:36.898Z] Total : 25174.00 98.34 0.00 0.00 0.00 0.00 0.00
00:12:16.978
00:12:17.921 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:17.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:17.921 Nvme0n1 : 2.00 25355.00 99.04 0.00 0.00 0.00 0.00 0.00
00:12:17.921 [2024-11-20T05:23:37.841Z] ===================================================================================================================
00:12:17.921 [2024-11-20T05:23:37.841Z] Total : 25355.00 99.04 0.00 0.00 0.00 0.00 0.00
00:12:17.921
00:12:17.921 true
00:12:17.921 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:17.921 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:12:18.182 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:12:18.182 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:12:18.182 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2533396
00:12:19.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:19.125 Nvme0n1 : 3.00 25413.00 99.27 0.00 0.00 0.00 0.00 0.00
00:12:19.125 [2024-11-20T05:23:39.045Z] ===================================================================================================================
00:12:19.125 [2024-11-20T05:23:39.045Z] Total : 25413.00 99.27 0.00 0.00 0.00 0.00 0.00
00:12:19.125
00:12:20.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:20.068 Nvme0n1 : 4.00 25459.50 99.45 0.00 0.00 0.00 0.00 0.00
00:12:20.068 [2024-11-20T05:23:39.988Z] ===================================================================================================================
00:12:20.068 [2024-11-20T05:23:39.988Z] Total : 25459.50 99.45 0.00 0.00 0.00 0.00 0.00
00:12:20.068
00:12:21.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:21.011 Nvme0n1 : 5.00 25487.00 99.56 0.00 0.00 0.00 0.00 0.00
00:12:21.011 [2024-11-20T05:23:40.931Z] ===================================================================================================================
00:12:21.011 [2024-11-20T05:23:40.931Z] Total : 25487.00 99.56 0.00 0.00 0.00 0.00 0.00
00:12:21.011
00:12:21.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:21.953 Nvme0n1 : 6.00 25516.33 99.67 0.00 0.00 0.00 0.00 0.00
00:12:21.953 [2024-11-20T05:23:41.873Z] ===================================================================================================================
00:12:21.953 [2024-11-20T05:23:41.873Z] Total : 25516.33 99.67 0.00 0.00 0.00 0.00 0.00
00:12:21.953
00:12:22.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:22.896 Nvme0n1 : 7.00 25537.29 99.76 0.00 0.00 0.00 0.00 0.00
00:12:22.896 [2024-11-20T05:23:42.816Z] ===================================================================================================================
00:12:22.896 [2024-11-20T05:23:42.816Z] Total : 25537.29 99.76 0.00 0.00 0.00 0.00 0.00
00:12:22.896
00:12:23.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:23.839 Nvme0n1 : 8.00 25553.25 99.82 0.00 0.00 0.00 0.00 0.00
00:12:23.839 [2024-11-20T05:23:43.759Z] ===================================================================================================================
00:12:23.839 [2024-11-20T05:23:43.759Z] Total : 25553.25 99.82 0.00 0.00 0.00 0.00 0.00
00:12:23.839
00:12:25.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:25.223 Nvme0n1 : 9.00 25572.44 99.89 0.00 0.00 0.00 0.00 0.00
00:12:25.223 [2024-11-20T05:23:45.143Z] ===================================================================================================================
00:12:25.223 [2024-11-20T05:23:45.143Z] Total : 25572.44 99.89 0.00 0.00 0.00 0.00 0.00
00:12:25.223
00:12:26.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:26.165 Nvme0n1 : 10.00 25581.50 99.93 0.00 0.00 0.00 0.00 0.00
00:12:26.165 [2024-11-20T05:23:46.085Z] ===================================================================================================================
00:12:26.165 [2024-11-20T05:23:46.085Z] Total : 25581.50 99.93 0.00 0.00 0.00 0.00 0.00
00:12:26.165
00:12:26.165
00:12:26.165 Latency(us)
00:12:26.165 [2024-11-20T05:23:46.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:26.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:26.165 Nvme0n1 : 10.00 25583.82 99.94 0.00 0.00 5000.31 3072.00 10868.05
00:12:26.165 [2024-11-20T05:23:46.085Z] ===================================================================================================================
00:12:26.165 [2024-11-20T05:23:46.085Z] Total : 25583.82 99.94 0.00 0.00 5000.31 3072.00 10868.05
00:12:26.165 {
00:12:26.165 "results": [
00:12:26.165 {
00:12:26.165 "job": "Nvme0n1",
00:12:26.165 "core_mask": "0x2",
00:12:26.165 "workload": "randwrite",
00:12:26.165 "status": "finished",
00:12:26.165 "queue_depth": 128,
00:12:26.165 "io_size": 4096,
00:12:26.165 "runtime": 10.004098,
00:12:26.165 "iops": 25583.815752304705,
00:12:26.165 "mibps": 99.93678028244025,
00:12:26.165 "io_failed": 0,
00:12:26.165 "io_timeout": 0,
00:12:26.165 "avg_latency_us": 5000.3116706454175,
00:12:26.165 "min_latency_us": 3072.0,
00:12:26.165 "max_latency_us": 10868.053333333333
00:12:26.165 }
00:12:26.165 ],
00:12:26.165 "core_count": 1
00:12:26.165 }
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2533052
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2533052 ']'
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2533052
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2533052
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2533052'
killing process with pid 2533052
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2533052
00:12:26.165 Received shutdown signal, test time was about 10.000000 seconds
00:12:26.165
00:12:26.165 Latency(us)
00:12:26.165 [2024-11-20T05:23:46.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:26.165 [2024-11-20T05:23:46.085Z] ===================================================================================================================
00:12:26.165 [2024-11-20T05:23:46.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2533052
00:12:26.165 06:23:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:26.425 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:12:26.425 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:26.425 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2529248
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2529248
00:12:26.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2529248 Killed "${NVMF_APP[@]}" "$@"
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2535426
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2535426
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2535426 ']'
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:26.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable
00:12:26.686 06:23:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:12:26.948 [2024-11-20 06:23:46.576107] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:12:26.948 [2024-11-20 06:23:46.576165] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:26.948 [2024-11-20 06:23:46.665613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:26.948 [2024-11-20 06:23:46.695649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:26.948 [2024-11-20 06:23:46.695677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:26.948 [2024-11-20 06:23:46.695686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:26.948 [2024-11-20 06:23:46.695691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
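This kill -9 is what puts the "dirty" in lvs_grow_dirty: the original nvmf target (pid 2529248) is removed with no chance to close the lvstore cleanly, and a fresh target (pid 2535426) is started in its place. When the AIO bdev is recreated below, the blobstore notices the unclean shutdown and replays its metadata, which is what the "Performing recovery on blobstore" and "Recover: blob 0x0/0x1" notices record. The post-recovery assertions then mirror the clean case, and the expected value is just the grown cluster count minus the lvol's allocation, both visible in the dumps in this trace; a hypothetical re-check, with <lvs uuid> standing in for the store's UUID (fcc7a038-... in this run):

    rpc.py bdev_lvol_get_lvstores -u <lvs uuid> | jq -r '.[0].free_clusters'   # 99 total - 38 allocated = 61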
00:12:26.948 [2024-11-20 06:23:46.695695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:26.948 [2024-11-20 06:23:46.696154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:27.518 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:27.518 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0
00:12:27.518 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:27.518 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable
00:12:27.518 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:12:27.518 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:27.518 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:27.779 [2024-11-20 06:23:47.555076] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:12:27.779 [2024-11-20 06:23:47.555149] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:12:27.779 [2024-11-20 06:23:47.555171] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:12:27.779 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:12:27.779 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d5bce152-a215-411f-a125-3d72662be045
00:12:27.779 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=d5bce152-a215-411f-a125-3d72662be045
00:12:27.779 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:27.779 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i
00:12:27.779 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:27.779 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:27.779 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:12:28.039 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d5bce152-a215-411f-a125-3d72662be045 -t 2000
00:12:28.039 [
00:12:28.039 {
00:12:28.039 "name": "d5bce152-a215-411f-a125-3d72662be045",
00:12:28.039 "aliases": [
00:12:28.039 "lvs/lvol"
00:12:28.039 ],
00:12:28.039 "product_name": "Logical Volume",
00:12:28.039 "block_size": 4096,
00:12:28.039 "num_blocks": 38912,
00:12:28.039 "uuid": "d5bce152-a215-411f-a125-3d72662be045",
00:12:28.039 "assigned_rate_limits": {
00:12:28.039 "rw_ios_per_sec": 0,
00:12:28.039 "rw_mbytes_per_sec": 0,
00:12:28.039 "r_mbytes_per_sec": 0,
00:12:28.039 "w_mbytes_per_sec": 0
00:12:28.039 },
00:12:28.039 "claimed": false,
00:12:28.039 "zoned": false,
00:12:28.039 "supported_io_types": {
00:12:28.039 "read": true,
00:12:28.039 "write": true,
00:12:28.039 "unmap": true,
00:12:28.039 "flush": false,
00:12:28.039 "reset": true,
00:12:28.039 "nvme_admin": false,
00:12:28.039 "nvme_io": false,
00:12:28.039 "nvme_io_md": false,
00:12:28.039 "write_zeroes": true,
00:12:28.039 "zcopy": false,
00:12:28.039 "get_zone_info": false,
00:12:28.039 "zone_management": false,
00:12:28.039 "zone_append": false,
00:12:28.039 "compare": false,
00:12:28.039 "compare_and_write": false,
00:12:28.039 "abort": false,
00:12:28.039 "seek_hole": true,
00:12:28.039 "seek_data": true,
00:12:28.039 "copy": false,
00:12:28.039 "nvme_iov_md": false
00:12:28.039 },
00:12:28.039 "driver_specific": {
00:12:28.039 "lvol": {
00:12:28.039 "lvol_store_uuid": "fcc7a038-a7b7-432b-a5a5-28248559984e",
00:12:28.039 "base_bdev": "aio_bdev",
00:12:28.039 "thin_provision": false,
00:12:28.039 "num_allocated_clusters": 38,
00:12:28.039 "snapshot": false,
00:12:28.039 "clone": false,
00:12:28.039 "esnap_clone": false
00:12:28.039 }
00:12:28.039 }
00:12:28.039 }
00:12:28.039 ]
00:12:28.039 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0
00:12:28.039 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:28.039 06:23:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:12:28.299 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # free_clusters=61
00:12:28.299 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:12:28.299 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:28.299 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:12:28.559 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:12:28.559 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:28.559 [2024-11-20 06:23:48.391685] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:12:28.559 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:28.559 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0
00:12:28.559 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:28.559 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:28.559 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:28.559 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:28.559 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:28.560 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:28.560 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:28.560 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:28.560 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:12:28.560 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:28.852 request:
00:12:28.852 {
00:12:28.852 "uuid": "fcc7a038-a7b7-432b-a5a5-28248559984e",
00:12:28.852 "method": "bdev_lvol_get_lvstores",
00:12:28.852 "req_id": 1
00:12:28.852 }
00:12:28.852 Got JSON-RPC error response
00:12:28.852 response:
00:12:28.852 {
00:12:28.852 "code": -19,
00:12:28.852 "message": "No such device"
00:12:28.852 }
00:12:28.852 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1
00:12:28.852 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:28.852 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:28.852 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:28.852 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:29.114 aio_bdev
00:12:29.114 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d5bce152-a215-411f-a125-3d72662be045
00:12:29.114 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=d5bce152-a215-411f-a125-3d72662be045
00:12:29.114 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:29.114 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i
00:12:29.114 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:29.114 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:29.114 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:12:29.114 06:23:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d5bce152-a215-411f-a125-3d72662be045 -t 2000
00:12:29.384 [
00:12:29.384 {
00:12:29.384 "name": "d5bce152-a215-411f-a125-3d72662be045",
00:12:29.384 "aliases": [
00:12:29.384 "lvs/lvol"
00:12:29.384 ],
00:12:29.384 "product_name": "Logical Volume",
00:12:29.384 "block_size": 4096,
00:12:29.384 "num_blocks": 38912,
00:12:29.384 "uuid": "d5bce152-a215-411f-a125-3d72662be045",
00:12:29.384 "assigned_rate_limits": {
00:12:29.384 "rw_ios_per_sec": 0,
00:12:29.384 "rw_mbytes_per_sec": 0,
00:12:29.384 "r_mbytes_per_sec": 0,
00:12:29.384 "w_mbytes_per_sec": 0
00:12:29.384 },
00:12:29.384 "claimed": false,
00:12:29.384 "zoned": false,
00:12:29.384 "supported_io_types": {
00:12:29.384 "read": true,
00:12:29.384 "write": true,
00:12:29.384 "unmap": true,
00:12:29.384 "flush": false,
00:12:29.384 "reset": true,
00:12:29.384 "nvme_admin": false,
00:12:29.384 "nvme_io": false,
00:12:29.384 "nvme_io_md": false,
00:12:29.384 "write_zeroes": true,
00:12:29.384 "zcopy": false,
00:12:29.384 "get_zone_info": false,
00:12:29.384 "zone_management": false,
00:12:29.384 "zone_append": false,
00:12:29.384 "compare": false,
00:12:29.384 "compare_and_write": false,
00:12:29.384 "abort": false,
00:12:29.384 "seek_hole": true,
00:12:29.384 "seek_data": true,
00:12:29.384 "copy": false,
00:12:29.384 "nvme_iov_md": false
00:12:29.384 },
00:12:29.384 "driver_specific": {
00:12:29.384 "lvol": {
00:12:29.384 "lvol_store_uuid": "fcc7a038-a7b7-432b-a5a5-28248559984e",
00:12:29.384 "base_bdev": "aio_bdev",
00:12:29.384 "thin_provision": false,
00:12:29.384 "num_allocated_clusters": 38,
00:12:29.384 "snapshot": false,
00:12:29.384 "clone": false,
00:12:29.384 "esnap_clone": false
00:12:29.384 }
00:12:29.384 }
00:12:29.384 }
00:12:29.384 ]
00:12:29.384 06:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0
00:12:29.384 06:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:29.384 06:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:12:29.657 06:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:12:29.657 06:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:29.657 06:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:12:29.657 06:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:12:29.657 06:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d5bce152-a215-411f-a125-3d72662be045
00:12:29.917 06:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fcc7a038-a7b7-432b-a5a5-28248559984e
00:12:30.177 06:23:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:30.177 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:30.177
00:12:30.177 real 0m17.594s
00:12:30.177 user 0m45.867s
00:12:30.177 sys 0m3.034s
00:12:30.177 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:30.177 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:12:30.177 ************************************
00:12:30.177 END TEST lvs_grow_dirty
00:12:30.177 ************************************
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']'
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]]
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:12:30.438 nvmf_trace.0
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:30.438 rmmod nvme_tcp
00:12:30.438 rmmod nvme_fabrics
00:12:30.438 rmmod nvme_keyring
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2535426 ']' 00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2535426 00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2535426 ']' 00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2535426 00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2535426 00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2535426' 00:12:30.438 killing process with pid 2535426 00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2535426 00:12:30.438 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2535426 00:12:30.698 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:30.698 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:30.699 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:30.699 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:12:30.699 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:12:30.699 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:30.699 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:12:30.699 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:30.699 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:30.699 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.699 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.699 06:23:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.611 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.611 00:12:32.611 real 0m44.735s 00:12:32.611 user 1m7.692s 00:12:32.611 sys 0m10.661s 00:12:32.611 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:32.611 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:32.611 ************************************ 00:12:32.611 END TEST nvmf_lvs_grow 00:12:32.611 ************************************ 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:32.873 ************************************ 00:12:32.873 START TEST nvmf_bdev_io_wait 00:12:32.873 ************************************ 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:32.873 * Looking for test storage... 00:12:32.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:32.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.873 --rc genhtml_branch_coverage=1 00:12:32.873 --rc genhtml_function_coverage=1 00:12:32.873 --rc genhtml_legend=1 00:12:32.873 --rc geninfo_all_blocks=1 00:12:32.873 --rc geninfo_unexecuted_blocks=1 00:12:32.873 00:12:32.873 ' 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:32.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.873 --rc genhtml_branch_coverage=1 00:12:32.873 --rc genhtml_function_coverage=1 00:12:32.873 --rc genhtml_legend=1 00:12:32.873 --rc geninfo_all_blocks=1 00:12:32.873 --rc geninfo_unexecuted_blocks=1 00:12:32.873 00:12:32.873 ' 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:32.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.873 --rc genhtml_branch_coverage=1 00:12:32.873 --rc genhtml_function_coverage=1 00:12:32.873 --rc genhtml_legend=1 00:12:32.873 --rc geninfo_all_blocks=1 00:12:32.873 --rc geninfo_unexecuted_blocks=1 00:12:32.873 00:12:32.873 ' 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:32.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.873 --rc genhtml_branch_coverage=1 00:12:32.873 --rc genhtml_function_coverage=1 00:12:32.873 --rc genhtml_legend=1 00:12:32.873 --rc geninfo_all_blocks=1 00:12:32.873 --rc geninfo_unexecuted_blocks=1 00:12:32.873 00:12:32.873 ' 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.873 06:23:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.873 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.135 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:33.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:12:33.136 06:23:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.409 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:41.410 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:41.410 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.410 06:24:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:41.410 Found net devices under 0000:31:00.0: cvl_0_0 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:41.410 Found net devices under 0000:31:00.1: cvl_0_1 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:41.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:12:41.410 00:12:41.410 --- 10.0.0.2 ping statistics --- 00:12:41.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.410 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:12:41.410 00:12:41.410 --- 10.0.0.1 ping statistics --- 00:12:41.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.410 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2540578 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2540578 00:12:41.410 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:41.411 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2540578 ']' 00:12:41.411 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.411 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:41.411 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.411 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:41.411 06:24:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.411 [2024-11-20 06:24:00.565659] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:12:41.411 [2024-11-20 06:24:00.565723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.411 [2024-11-20 06:24:00.667986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.411 [2024-11-20 06:24:00.725404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.411 [2024-11-20 06:24:00.725458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.411 [2024-11-20 06:24:00.725467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.411 [2024-11-20 06:24:00.725474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.411 [2024-11-20 06:24:00.725481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.411 [2024-11-20 06:24:00.727736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.411 [2024-11-20 06:24:00.727910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.411 [2024-11-20 06:24:00.728213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.411 [2024-11-20 06:24:00.728216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:12:41.672 [2024-11-20 06:24:01.535530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.672 Malloc0 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.672 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:41.933 [2024-11-20 06:24:01.601173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2540980 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2540982 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:41.933 { 00:12:41.933 "params": { 
00:12:41.933 "name": "Nvme$subsystem", 00:12:41.933 "trtype": "$TEST_TRANSPORT", 00:12:41.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.933 "adrfam": "ipv4", 00:12:41.933 "trsvcid": "$NVMF_PORT", 00:12:41.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.933 "hdgst": ${hdgst:-false}, 00:12:41.933 "ddgst": ${ddgst:-false} 00:12:41.933 }, 00:12:41.933 "method": "bdev_nvme_attach_controller" 00:12:41.933 } 00:12:41.933 EOF 00:12:41.933 )") 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2540984 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:41.933 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:41.934 { 00:12:41.934 "params": { 00:12:41.934 "name": "Nvme$subsystem", 00:12:41.934 "trtype": "$TEST_TRANSPORT", 00:12:41.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.934 "adrfam": "ipv4", 00:12:41.934 "trsvcid": "$NVMF_PORT", 00:12:41.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.934 "hdgst": ${hdgst:-false}, 00:12:41.934 "ddgst": ${ddgst:-false} 00:12:41.934 }, 00:12:41.934 "method": "bdev_nvme_attach_controller" 00:12:41.934 } 00:12:41.934 EOF 00:12:41.934 )") 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2540987 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:41.934 { 00:12:41.934 "params": { 00:12:41.934 "name": "Nvme$subsystem", 00:12:41.934 "trtype": "$TEST_TRANSPORT", 00:12:41.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.934 "adrfam": "ipv4", 00:12:41.934 "trsvcid": "$NVMF_PORT", 00:12:41.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.934 "hdgst": ${hdgst:-false}, 
00:12:41.934 "ddgst": ${ddgst:-false} 00:12:41.934 }, 00:12:41.934 "method": "bdev_nvme_attach_controller" 00:12:41.934 } 00:12:41.934 EOF 00:12:41.934 )") 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:41.934 { 00:12:41.934 "params": { 00:12:41.934 "name": "Nvme$subsystem", 00:12:41.934 "trtype": "$TEST_TRANSPORT", 00:12:41.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:41.934 "adrfam": "ipv4", 00:12:41.934 "trsvcid": "$NVMF_PORT", 00:12:41.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:41.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:41.934 "hdgst": ${hdgst:-false}, 00:12:41.934 "ddgst": ${ddgst:-false} 00:12:41.934 }, 00:12:41.934 "method": "bdev_nvme_attach_controller" 00:12:41.934 } 00:12:41.934 EOF 00:12:41.934 )") 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2540980 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:41.934 "params": { 00:12:41.934 "name": "Nvme1", 00:12:41.934 "trtype": "tcp", 00:12:41.934 "traddr": "10.0.0.2", 00:12:41.934 "adrfam": "ipv4", 00:12:41.934 "trsvcid": "4420", 00:12:41.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.934 "hdgst": false, 00:12:41.934 "ddgst": false 00:12:41.934 }, 00:12:41.934 "method": "bdev_nvme_attach_controller" 00:12:41.934 }' 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:41.934 "params": { 00:12:41.934 "name": "Nvme1", 00:12:41.934 "trtype": "tcp", 00:12:41.934 "traddr": "10.0.0.2", 00:12:41.934 "adrfam": "ipv4", 00:12:41.934 "trsvcid": "4420", 00:12:41.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.934 "hdgst": false, 00:12:41.934 "ddgst": false 00:12:41.934 }, 00:12:41.934 "method": "bdev_nvme_attach_controller" 00:12:41.934 }' 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:41.934 "params": { 00:12:41.934 "name": "Nvme1", 00:12:41.934 "trtype": "tcp", 00:12:41.934 "traddr": "10.0.0.2", 00:12:41.934 "adrfam": "ipv4", 00:12:41.934 "trsvcid": "4420", 00:12:41.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.934 "hdgst": false, 00:12:41.934 "ddgst": false 00:12:41.934 }, 00:12:41.934 "method": "bdev_nvme_attach_controller" 00:12:41.934 }' 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:12:41.934 06:24:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:41.934 "params": { 00:12:41.934 "name": "Nvme1", 00:12:41.934 "trtype": "tcp", 00:12:41.934 "traddr": "10.0.0.2", 00:12:41.934 "adrfam": "ipv4", 00:12:41.934 "trsvcid": "4420", 00:12:41.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:41.934 "hdgst": false, 00:12:41.934 "ddgst": false 00:12:41.934 }, 00:12:41.934 "method": "bdev_nvme_attach_controller" 00:12:41.934 }' [2024-11-20 06:24:01.661731] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... [2024-11-20 06:24:01.661731] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... [2024-11-20 06:24:01.661814] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:41.935 [2024-11-20 06:24:01.661816] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:41.935 [2024-11-20 06:24:01.662454] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:12:41.935 [2024-11-20 06:24:01.662508] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:41.935 [2024-11-20 06:24:01.676183] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:12:41.935 [2024-11-20 06:24:01.676251] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:42.195 [2024-11-20 06:24:01.879191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.195 [2024-11-20 06:24:01.917981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:12:42.195 [2024-11-20 06:24:01.971157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.195 [2024-11-20 06:24:02.011539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:42.195 [2024-11-20 06:24:02.064761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.195 [2024-11-20 06:24:02.107779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:42.456 [2024-11-20 06:24:02.121559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.456 [2024-11-20 06:24:02.159086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:42.456 Running I/O for 1 seconds... 00:12:42.456 Running I/O for 1 seconds... 00:12:42.717 Running I/O for 1 seconds... 00:12:42.717 Running I/O for 1 seconds... 00:12:43.660 7033.00 IOPS, 27.47 MiB/s 00:12:43.660 Latency(us) 00:12:43.660 [2024-11-20T05:24:03.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.660 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:43.660 Nvme1n1 : 1.02 7049.06 27.54 0.00 0.00 17982.07 4341.76 30146.56 00:12:43.660 [2024-11-20T05:24:03.580Z] =================================================================================================================== 00:12:43.660 [2024-11-20T05:24:03.580Z] Total : 7049.06 27.54 0.00 0.00 17982.07 4341.76 30146.56 00:12:43.660 11625.00 IOPS, 45.41 MiB/s 00:12:43.660 Latency(us) 00:12:43.660 [2024-11-20T05:24:03.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.660 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:43.660 Nvme1n1 : 1.01 11678.73 45.62 0.00 0.00 10918.18 5843.63 21626.88 00:12:43.660 [2024-11-20T05:24:03.580Z] =================================================================================================================== 00:12:43.660 [2024-11-20T05:24:03.580Z] Total : 11678.73 45.62 0.00 0.00 10918.18 5843.63 21626.88 00:12:43.660 7010.00 IOPS, 27.38 MiB/s 00:12:43.660 Latency(us) 00:12:43.660 [2024-11-20T05:24:03.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.660 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:43.660 Nvme1n1 : 1.01 7099.83 27.73 0.00 0.00 17979.51 3822.93 41506.13 00:12:43.660 [2024-11-20T05:24:03.580Z] =================================================================================================================== 00:12:43.660 [2024-11-20T05:24:03.580Z] Total : 7099.83 27.73 0.00 0.00 17979.51 3822.93 41506.13 00:12:43.660 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2540982 00:12:43.660 182728.00 IOPS, 713.78 MiB/s 00:12:43.660 Latency(us) 00:12:43.660 [2024-11-20T05:24:03.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.660 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:43.660 Nvme1n1 : 1.00 182364.89 712.36 0.00 0.00 697.89 300.37 1979.73 00:12:43.660 
[2024-11-20T05:24:03.580Z] =================================================================================================================== 00:12:43.660 [2024-11-20T05:24:03.580Z] Total : 182364.89 712.36 0.00 0.00 697.89 300.37 1979.73 00:12:43.660 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2540984 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2540987 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.920 rmmod nvme_tcp 00:12:43.920 rmmod nvme_fabrics 00:12:43.920 rmmod nvme_keyring 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2540578 ']' 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2540578 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2540578 ']' 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2540578 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2540578 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2540578' 00:12:43.920 killing process with pid 2540578 
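The four per-workload latency tables above hang together under Little's law: with a queue depth of 128 per job, average latency should come out near queue_depth / IOPS. A quick check against the flush job's numbers, taken straight from the table:

# Little's law sanity check: latency ≈ queue_depth / IOPS
awk 'BEGIN { printf "%.2f us\n", 128 / 182364.89 * 1e6 }'
# prints 701.89 us -- within ~1% of the 697.89 us average bdevperf reports

The same arithmetic holds for the write job: 128 / 7049.06 ≈ 18.2 ms against the reported 17.98 ms average, the gap being ramp-up and completion-batching effects over the 1-second run.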
00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2540578 00:12:43.920 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2540578 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.180 06:24:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.096 06:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:46.096 00:12:46.096 real 0m13.366s 00:12:46.096 user 0m20.172s 00:12:46.096 sys 0m7.559s 00:12:46.096 06:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:46.096 06:24:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:46.096 ************************************ 00:12:46.096 END TEST nvmf_bdev_io_wait 00:12:46.096 ************************************ 00:12:46.096 06:24:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:46.096 06:24:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:46.096 06:24:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:46.096 06:24:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:46.358 ************************************ 00:12:46.358 START TEST nvmf_queue_depth 00:12:46.358 ************************************ 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:46.358 * Looking for test storage... 
00:12:46.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:46.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.358 --rc genhtml_branch_coverage=1 00:12:46.358 --rc genhtml_function_coverage=1 00:12:46.358 --rc genhtml_legend=1 00:12:46.358 --rc geninfo_all_blocks=1 00:12:46.358 --rc geninfo_unexecuted_blocks=1 00:12:46.358 00:12:46.358 ' 00:12:46.358 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:46.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.359 --rc genhtml_branch_coverage=1 00:12:46.359 --rc genhtml_function_coverage=1 00:12:46.359 --rc genhtml_legend=1 00:12:46.359 --rc geninfo_all_blocks=1 00:12:46.359 --rc geninfo_unexecuted_blocks=1 00:12:46.359 00:12:46.359 ' 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:46.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.359 --rc genhtml_branch_coverage=1 00:12:46.359 --rc genhtml_function_coverage=1 00:12:46.359 --rc genhtml_legend=1 00:12:46.359 --rc geninfo_all_blocks=1 00:12:46.359 --rc geninfo_unexecuted_blocks=1 00:12:46.359 00:12:46.359 ' 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:46.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.359 --rc genhtml_branch_coverage=1 00:12:46.359 --rc genhtml_function_coverage=1 00:12:46.359 --rc genhtml_legend=1 00:12:46.359 --rc geninfo_all_blocks=1 00:12:46.359 --rc geninfo_unexecuted_blocks=1 00:12:46.359 00:12:46.359 ' 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:46.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.359 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.621 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:46.621 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:46.621 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:12:46.621 06:24:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:54.766 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:54.766 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:54.766 Found net devices under 0000:31:00.0: cvl_0_0 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.766 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:54.767 Found net devices under 0000:31:00.1: cvl_0_1 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:12:54.767 00:12:54.767 --- 10.0.0.2 ping statistics --- 00:12:54.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.767 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:12:54.767 00:12:54.767 --- 10.0.0.1 ping statistics --- 00:12:54.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.767 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2546201 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2546201 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2546201 ']' 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:54.767 06:24:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:54.767 [2024-11-20 06:24:14.047326] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
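With connectivity proven in both directions, the target can start. Stepping back, the nvmf_tcp_init sequence traced above wired the two e810 ports into a point-to-point rig: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the trace (interface names and addresses are specific to this host):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open NVMe/TCP (port 4420) toward the initiator interface, tagged for cleanup
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

The SPDK_NVMF comment is what the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore step (the iptr helper, visible at the end of the previous test) keys on, so only the rules this test inserted get stripped.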
00:12:54.767 [2024-11-20 06:24:14.047394] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.767 [2024-11-20 06:24:14.150537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.767 [2024-11-20 06:24:14.202456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.767 [2024-11-20 06:24:14.202507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.767 [2024-11-20 06:24:14.202516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.767 [2024-11-20 06:24:14.202523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.767 [2024-11-20 06:24:14.202529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.767 [2024-11-20 06:24:14.203349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.030 [2024-11-20 06:24:14.928690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.030 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.291 Malloc0 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.291 06:24:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.291 [2024-11-20 06:24:14.990008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2546374 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2546374 /var/tmp/bdevperf.sock 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2546374 ']' 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:55.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:55.291 06:24:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.291 [2024-11-20 06:24:15.058734] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
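The target-side bring-up just traced boils down to create-transport / create-bdev / create-subsystem / add-namespace / add-listener. The test drives it through the rpc_cmd wrapper; as plain scripts/rpc.py calls the same setup would look roughly like this -- a sketch for reproducing outside the harness, with socket paths and NQNs as in the trace, and the initiator-side attach following just below in the log:

# target side: nvmf_tgt was started inside cvl_0_0_ns_spdk with -i 0 -m 0x2,
# so its RPC socket is the default /var/tmp/spdk.sock
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                               # -a: allow any host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf waits in -z mode on its own RPC socket
# (-r /var/tmp/bdevperf.sock above); the controller is attached through it
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

After the attach, queue_depth.sh@35 kicks off the 10-second verify run at queue depth 1024 via bdevperf.py perform_tests, which is what produces the IOPS ramp below.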
00:12:55.292 [2024-11-20 06:24:15.058807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546374 ] 00:12:55.292 [2024-11-20 06:24:15.152429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.292 [2024-11-20 06:24:15.206145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.235 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:56.235 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:12:56.235 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:56.235 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.235 06:24:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:56.235 NVMe0n1 00:12:56.235 06:24:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.235 06:24:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:56.495 Running I/O for 10 seconds... 00:12:58.378 9257.00 IOPS, 36.16 MiB/s [2024-11-20T05:24:19.241Z] 10459.50 IOPS, 40.86 MiB/s [2024-11-20T05:24:20.625Z] 10922.67 IOPS, 42.67 MiB/s [2024-11-20T05:24:21.569Z] 11056.25 IOPS, 43.19 MiB/s [2024-11-20T05:24:22.513Z] 11472.80 IOPS, 44.82 MiB/s [2024-11-20T05:24:23.455Z] 11776.67 IOPS, 46.00 MiB/s [2024-11-20T05:24:24.396Z] 11996.00 IOPS, 46.86 MiB/s [2024-11-20T05:24:25.338Z] 12166.50 IOPS, 47.53 MiB/s [2024-11-20T05:24:26.281Z] 12343.89 IOPS, 48.22 MiB/s [2024-11-20T05:24:26.542Z] 12489.70 IOPS, 48.79 MiB/s 00:13:06.622 Latency(us) 00:13:06.622 [2024-11-20T05:24:26.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.622 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:06.622 Verification LBA range: start 0x0 length 0x4000 00:13:06.622 NVMe0n1 : 10.06 12509.78 48.87 0.00 0.00 81593.56 24794.45 73400.32 00:13:06.622 [2024-11-20T05:24:26.542Z] =================================================================================================================== 00:13:06.622 [2024-11-20T05:24:26.542Z] Total : 12509.78 48.87 0.00 0.00 81593.56 24794.45 73400.32 00:13:06.622 { 00:13:06.622 "results": [ 00:13:06.622 { 00:13:06.622 "job": "NVMe0n1", 00:13:06.622 "core_mask": "0x1", 00:13:06.622 "workload": "verify", 00:13:06.622 "status": "finished", 00:13:06.622 "verify_range": { 00:13:06.622 "start": 0, 00:13:06.622 "length": 16384 00:13:06.622 }, 00:13:06.622 "queue_depth": 1024, 00:13:06.622 "io_size": 4096, 00:13:06.622 "runtime": 10.063006, 00:13:06.622 "iops": 12509.780874621361, 00:13:06.622 "mibps": 48.86633154148969, 00:13:06.622 "io_failed": 0, 00:13:06.622 "io_timeout": 0, 00:13:06.622 "avg_latency_us": 81593.55685408492, 00:13:06.622 "min_latency_us": 24794.453333333335, 00:13:06.622 "max_latency_us": 73400.32 00:13:06.622 } 00:13:06.622 ], 00:13:06.622 "core_count": 1 00:13:06.622 } 00:13:06.622 06:24:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2546374 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2546374 ']' 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2546374 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2546374 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2546374' 00:13:06.622 killing process with pid 2546374 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2546374 00:13:06.622 Received shutdown signal, test time was about 10.000000 seconds 00:13:06.622 00:13:06.622 Latency(us) 00:13:06.622 [2024-11-20T05:24:26.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.622 [2024-11-20T05:24:26.542Z] =================================================================================================================== 00:13:06.622 [2024-11-20T05:24:26.542Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2546374 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:06.622 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:06.622 rmmod nvme_tcp 00:13:06.622 rmmod nvme_fabrics 00:13:06.622 rmmod nvme_keyring 00:13:06.883 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.883 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2546201 ']' 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2546201 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2546201 ']' 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 2546201 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2546201 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2546201' 00:13:06.884 killing process with pid 2546201 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2546201 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2546201 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.884 06:24:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.437 06:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:09.437 00:13:09.437 real 0m22.785s 00:13:09.437 user 0m26.055s 00:13:09.437 sys 0m7.138s 00:13:09.437 06:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:09.437 06:24:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:09.437 ************************************ 00:13:09.437 END TEST nvmf_queue_depth 00:13:09.437 ************************************ 00:13:09.437 06:24:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:09.437 06:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:09.437 06:24:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:09.437 06:24:28 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.437 ************************************ 00:13:09.437 START TEST nvmf_target_multipath 00:13:09.437 ************************************ 00:13:09.437 06:24:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:09.437 * Looking for test storage... 00:13:09.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:09.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.437 --rc genhtml_branch_coverage=1 00:13:09.437 --rc genhtml_function_coverage=1 00:13:09.437 --rc genhtml_legend=1 00:13:09.437 --rc geninfo_all_blocks=1 00:13:09.437 --rc geninfo_unexecuted_blocks=1 00:13:09.437 00:13:09.437 ' 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:09.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.437 --rc genhtml_branch_coverage=1 00:13:09.437 --rc genhtml_function_coverage=1 00:13:09.437 --rc genhtml_legend=1 00:13:09.437 --rc geninfo_all_blocks=1 00:13:09.437 --rc geninfo_unexecuted_blocks=1 00:13:09.437 00:13:09.437 ' 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:09.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.437 --rc genhtml_branch_coverage=1 00:13:09.437 --rc genhtml_function_coverage=1 00:13:09.437 --rc genhtml_legend=1 00:13:09.437 --rc geninfo_all_blocks=1 00:13:09.437 --rc geninfo_unexecuted_blocks=1 00:13:09.437 00:13:09.437 ' 00:13:09.437 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:09.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.438 --rc genhtml_branch_coverage=1 00:13:09.438 --rc genhtml_function_coverage=1 00:13:09.438 --rc genhtml_legend=1 00:13:09.438 --rc geninfo_all_blocks=1 00:13:09.438 --rc geninfo_unexecuted_blocks=1 00:13:09.438 00:13:09.438 ' 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.438 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.439 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.439 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:09.439 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:09.439 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:13:09.439 06:24:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:17.580 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.580 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:13:17.580 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:17.580 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:17.580 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:17.580 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:17.580 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:17.581 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:17.581 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:17.581 Found net devices under 0000:31:00.0: cvl_0_0 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.581 06:24:36 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:17.581 Found net devices under 0000:31:00.1: cvl_0_1 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.581 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:17.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:13:17.582 00:13:17.582 --- 10.0.0.2 ping statistics --- 00:13:17.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.582 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:17.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:13:17.582 00:13:17.582 --- 10.0.0.1 ping statistics --- 00:13:17.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.582 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:17.582 only one NIC for nvmf test 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
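The sequence traced above is the nvmf_tcp_init path from test/nvmf/common.sh: one e810 port is moved into a network namespace to act as the target (10.0.0.2) while the other stays in the root namespace as the initiator (10.0.0.1), a tagged iptables rule opens the NVMe/TCP listener port, and a ping in each direction verifies the link before any test runs. A minimal sketch of that wiring, assuming the interface and namespace names shown in this log (the real helper derives them from the detected PCI devices):

    #!/usr/bin/env bash
    # Sketch of the namespace setup traced above (nvmf/common.sh, nvmf_tcp_init).
    # Names are taken from this log; treat them as examples, not fixed values.
    TARGET_IF=cvl_0_0          # moved into the namespace, becomes 10.0.0.2
    INITIATOR_IF=cvl_0_1       # stays in the root namespace as 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"            # isolate the target port
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP port; the comment tag lets teardown scrub only this rule.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Sanity-check both directions before any test runs.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

Putting the target port in its own namespace lets a single host exercise real NIC-to-NIC TCP traffic against itself; the multipath test still bails out right after this setup ("only one NIC for nvmf test") because no second target interface is available for a second path.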
00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.582 rmmod nvme_tcp 00:13:17.582 rmmod nvme_fabrics 00:13:17.582 rmmod nvme_keyring 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.582 06:24:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:19.496 00:13:19.496 real 0m10.039s 00:13:19.496 user 0m2.191s 00:13:19.496 sys 0m5.784s 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:19.496 ************************************ 00:13:19.496 END TEST nvmf_target_multipath 00:13:19.496 ************************************ 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.496 06:24:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:19.496 ************************************ 00:13:19.496 START TEST nvmf_zcopy 00:13:19.496 ************************************ 00:13:19.496 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:19.496 * Looking for test storage... 
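The teardown traced above (nvmftestfini into nvmf_tcp_fini) runs twice here, once from the test body and once from the EXIT trap. Its structure can be read off the xtrace line numbers: a retried unload of nvme-tcp, a single unload of nvme-fabrics, a firewall scrub of only the SPDK-tagged rules, namespace removal, and an address flush. A sketch under those assumptions; the _remove_spdk_ns body is hidden by xtrace_disable_per_cmd in the log, so the netns deletion shown is an assumption:

    # Sketch of the nvmftestfini/nvmf_tcp_fini teardown traced above.
    nvmf_tcp_teardown() {   # hypothetical wrapper name for the traced sequence
        sync
        set +e
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break   # retried: the module can be busy right after a test
        done
        modprobe -v -r nvme-fabrics
        set -e
        # iptr: drop only the SPDK-tagged rules, leaving the rest of the firewall intact.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed effect of _remove_spdk_ns
        ip -4 addr flush cvl_0_1
    }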
00:13:19.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:19.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.497 --rc genhtml_branch_coverage=1 00:13:19.497 --rc genhtml_function_coverage=1 00:13:19.497 --rc genhtml_legend=1 00:13:19.497 --rc geninfo_all_blocks=1 00:13:19.497 --rc geninfo_unexecuted_blocks=1 00:13:19.497 00:13:19.497 ' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:19.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.497 --rc genhtml_branch_coverage=1 00:13:19.497 --rc genhtml_function_coverage=1 00:13:19.497 --rc genhtml_legend=1 00:13:19.497 --rc geninfo_all_blocks=1 00:13:19.497 --rc geninfo_unexecuted_blocks=1 00:13:19.497 00:13:19.497 ' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:19.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.497 --rc genhtml_branch_coverage=1 00:13:19.497 --rc genhtml_function_coverage=1 00:13:19.497 --rc genhtml_legend=1 00:13:19.497 --rc geninfo_all_blocks=1 00:13:19.497 --rc geninfo_unexecuted_blocks=1 00:13:19.497 00:13:19.497 ' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:19.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.497 --rc genhtml_branch_coverage=1 00:13:19.497 --rc genhtml_function_coverage=1 00:13:19.497 --rc genhtml_legend=1 00:13:19.497 --rc geninfo_all_blocks=1 00:13:19.497 --rc geninfo_unexecuted_blocks=1 00:13:19.497 00:13:19.497 ' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:19.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:19.497 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:19.498 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.498 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.498 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.498 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:19.498 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:19.498 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:13:19.498 06:24:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.639 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:27.640 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:27.640 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:27.640 Found net devices under 0000:31:00.0: cvl_0_0 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:27.640 Found net devices under 0000:31:00.1: cvl_0_1 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:27.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:27.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms
00:13:27.640
00:13:27.640 --- 10.0.0.2 ping statistics ---
00:13:27.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:27.640 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:27.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
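Condensed from the nvmf_tcp_init trace above, the dual-port topology is reproducible by hand (interface names are this machine's cvl_0_* ports; run as root). Splitting the two ports across network namespaces is what makes NVMe/TCP traffic cross the physical link even though target and initiator share one host:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                       # root namespace -> target, as logged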
00:13:27.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms
00:13:27.640
00:13:27.640 --- 10.0.0.1 ping statistics ---
00:13:27.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:27.640 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2557320
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2557320
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2557320 ']'
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100
00:13:27.640 06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:27.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable
06:24:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
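waitforlisten above simply blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock (up to max_retries=100). A rough equivalent of that poll loop, sketched with rpc.py's rpc_get_methods as the liveness probe and paths relative to the SPDK checkout (the real helper in autotest_common.sh does more bookkeeping):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the RPC socket until the target answers; bail out if it died.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done
    echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"

Note that the UNIX-domain RPC socket stays reachable from the root namespace even though the process runs inside cvl_0_0_ns_spdk: network namespaces isolate interfaces, not filesystem-bound sockets.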
00:13:27.641 [2024-11-20 06:24:47.002129] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:13:27.641 [2024-11-20 06:24:47.002198] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:27.641 [2024-11-20 06:24:47.099417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:27.641 [2024-11-20 06:24:47.149246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:27.641 [2024-11-20 06:24:47.149293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:27.641 [2024-11-20 06:24:47.149302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:27.641 [2024-11-20 06:24:47.149309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:27.641 [2024-11-20 06:24:47.149315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:27.641 [2024-11-20 06:24:47.150163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:28.212 [2024-11-20 06:24:47.885679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:28.212 [2024-11-20 06:24:47.909943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
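rpc_cmd above is a thin wrapper around scripts/rpc.py pointed at that socket, so the target-side setup can be replayed by hand; the flags are copied verbatim from the xtrace (-c 0 sets the in-capsule data size to zero, and --zcopy switches on the TCP transport's zero-copy path, the feature this suite exercises):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    # Create the zcopy-enabled TCP transport, then a subsystem and its listener.
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Here -a allows any host NQN to connect and -m 10 caps the subsystem at ten namespaces, matching the trace.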
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:28.212 malloc0
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:13:28.212 {
00:13:28.212 "params": {
00:13:28.212 "name": "Nvme$subsystem",
00:13:28.212 "trtype": "$TEST_TRANSPORT",
00:13:28.212 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:28.212 "adrfam": "ipv4",
00:13:28.212 "trsvcid": "$NVMF_PORT",
00:13:28.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:28.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:28.212 "hdgst": ${hdgst:-false},
00:13:28.212 "ddgst": ${ddgst:-false}
00:13:28.212 },
00:13:28.212 "method": "bdev_nvme_attach_controller"
00:13:28.212 }
00:13:28.212 EOF
00:13:28.212 )")
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
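bdevperf is not NVMe-oF-aware on its own; it attaches to the target through a bdev_nvme_attach_controller entry in a JSON config, which gen_nvmf_target_json assembles from the heredoc above and the harness streams in over an anonymous descriptor (--json /dev/fd/62). A hand-rolled equivalent using process substitution; the outer "subsystems"/"config" wrapper is assumed here from SPDK's standard JSON-config schema, since the trace only shows the params block (printed just below):

    config='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }'
    ./build/examples/bdevperf --json <(printf '%s\n' "$config") -t 10 -q 128 -w verify -o 8192

-t 10 runs for ten seconds, -q 128 keeps 128 I/Os in flight, -w verify does write-read-compare, and -o 8192 issues 8 KiB I/Os, matching the numbers in the summary that follows.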
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:13:28.212 06:24:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:13:28.212 "params": {
00:13:28.212 "name": "Nvme1",
00:13:28.212 "trtype": "tcp",
00:13:28.212 "traddr": "10.0.0.2",
00:13:28.212 "adrfam": "ipv4",
00:13:28.212 "trsvcid": "4420",
00:13:28.212 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:28.212 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:28.212 "hdgst": false,
00:13:28.212 "ddgst": false
00:13:28.212 },
00:13:28.212 "method": "bdev_nvme_attach_controller"
00:13:28.212 }'
00:13:28.212 [2024-11-20 06:24:48.013349] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:13:28.212 [2024-11-20 06:24:48.013413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2557383 ]
00:13:28.212 [2024-11-20 06:24:48.104053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:28.474 [2024-11-20 06:24:48.157809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:28.474 Running I/O for 10 seconds...
00:13:30.799 6466.00 IOPS, 50.52 MiB/s
[2024-11-20T05:24:51.662Z] 6520.00 IOPS, 50.94 MiB/s
[2024-11-20T05:24:52.604Z] 6785.33 IOPS, 53.01 MiB/s
[2024-11-20T05:24:53.548Z] 7521.50 IOPS, 58.76 MiB/s
[2024-11-20T05:24:54.491Z] 7969.20 IOPS, 62.26 MiB/s
[2024-11-20T05:24:55.435Z] 8266.50 IOPS, 64.58 MiB/s
[2024-11-20T05:24:56.820Z] 8474.43 IOPS, 66.21 MiB/s
[2024-11-20T05:24:57.764Z] 8632.75 IOPS, 67.44 MiB/s
[2024-11-20T05:24:58.704Z] 8757.89 IOPS, 68.42 MiB/s
[2024-11-20T05:24:58.704Z] 8855.40 IOPS, 69.18 MiB/s
00:13:38.784 Latency(us)
[2024-11-20T05:24:58.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:38.784 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:38.784 Verification LBA range: start 0x0 length 0x1000
00:13:38.784 Nvme1n1 : 10.01 8858.88 69.21 0.00 0.00 14403.46 2143.57 29272.75
00:13:38.784 [2024-11-20T05:24:58.704Z] ===================================================================================================================
00:13:38.784 [2024-11-20T05:24:58.704Z] Total : 8858.88 69.21 0.00 0.00 14403.46 2143.57 29272.75
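A quick consistency check on that summary: with -o 8192 the MiB/s column is just IOPS divided by 128 (8192 B per I/O, 1048576 B per MiB), so the totals line is self-consistent:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8858.88 * 8192 / (1024 * 1024) }'   # prints 69.21

The remaining columns are latency in microseconds; 14403.46 us average at queue depth 128 is what Little's law predicts from the throughput (128 / 8858.88 s, about 14.4 ms per I/O).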
"Nvme$subsystem", 00:13:38.784 "trtype": "$TEST_TRANSPORT", 00:13:38.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.784 "adrfam": "ipv4", 00:13:38.784 "trsvcid": "$NVMF_PORT", 00:13:38.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.784 "hdgst": ${hdgst:-false}, 00:13:38.784 "ddgst": ${ddgst:-false} 00:13:38.784 }, 00:13:38.784 "method": "bdev_nvme_attach_controller" 00:13:38.784 } 00:13:38.784 EOF 00:13:38.784 )") 00:13:38.784 06:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:38.784 [2024-11-20 06:24:58.510968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.784 [2024-11-20 06:24:58.510996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.784 06:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:13:38.784 06:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:38.784 06:24:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:38.784 "params": { 00:13:38.784 "name": "Nvme1", 00:13:38.784 "trtype": "tcp", 00:13:38.784 "traddr": "10.0.0.2", 00:13:38.784 "adrfam": "ipv4", 00:13:38.784 "trsvcid": "4420", 00:13:38.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.784 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.784 "hdgst": false, 00:13:38.784 "ddgst": false 00:13:38.784 }, 00:13:38.784 "method": "bdev_nvme_attach_controller" 00:13:38.784 }' 00:13:38.784 [2024-11-20 06:24:58.522966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.784 [2024-11-20 06:24:58.522975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.784 [2024-11-20 06:24:58.534995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.784 [2024-11-20 06:24:58.535003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.784 [2024-11-20 06:24:58.547025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.784 [2024-11-20 06:24:58.547033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.784 [2024-11-20 06:24:58.552884] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:13:38.784 [2024-11-20 06:24:58.552884] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:13:38.784 [2024-11-20 06:24:58.552930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559536 ]
00:13:38.784 [2024-11-20 06:24:58.559055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:38.784 [2024-11-20 06:24:58.559062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-record error pair repeats every 10-15 ms for the remainder of the excerpt; only distinct records are kept below ...]
00:13:38.785 [2024-11-20 06:24:58.635632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:38.785 [2024-11-20 06:24:58.664473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:39.045 Running I/O for 5 seconds...
00:13:40.138 19092.00 IOPS, 149.16 MiB/s [2024-11-20T05:25:00.058Z]
00:13:41.184 19178.00 IOPS, 149.83 MiB/s [2024-11-20T05:25:01.104Z]
[... error pairs continue past the end of this excerpt; the last visible stamp is 2024-11-20 06:25:01.192060 ...]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.192074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.205141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.205156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.218174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.218189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.231781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.231795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.244604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.244618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.257184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.257198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.270720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.270735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.283434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.283449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.296249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.296263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.309985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.309999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.323566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.323579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.336580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.336598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.445 [2024-11-20 06:25:01.350002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.445 [2024-11-20 06:25:01.350017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.362866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.362881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.375862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.375876] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.388869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.388883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.402100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.402114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.414914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.414928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.428283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.428298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.441075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.441089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.454542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.454556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.467564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.467578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.480860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.480875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.494319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.494333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.507610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.507625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.521075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.521089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.534521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.534536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.547419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.547433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.560556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.560571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.574204] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.574219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.587228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.587247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.600982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.600997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.707 [2024-11-20 06:25:01.614322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.707 [2024-11-20 06:25:01.614337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.627806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.627821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.640988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.641002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.654307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.654322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.667853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.667868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.680977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.680991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.694462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.694477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.707474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.707489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.721266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.721280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.734267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.734281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.747218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.747233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.760346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.760360] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.773894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.773909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.787338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.787353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.799760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.799774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.812932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.812947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.826306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.826321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.839175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.839189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.851897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.851911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.865072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.865086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.968 [2024-11-20 06:25:01.878462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.968 [2024-11-20 06:25:01.878476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:01.891571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:01.891587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 19202.67 IOPS, 150.02 MiB/s [2024-11-20T05:25:02.149Z] [2024-11-20 06:25:01.905164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:01.905179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:01.918984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:01.918999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:01.932253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:01.932267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:01.945795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:01.945810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 
06:25:01.958752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:01.958767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:01.972461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:01.972476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:01.985753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:01.985768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:01.999264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:01.999280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:02.012169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:02.012184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:02.025362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:02.025377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:02.038887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:02.038901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:02.052125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:02.052141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:02.065334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:02.065349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:02.078487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:02.078502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:02.091963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:02.091978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:02.105726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:02.105740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:02.119320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:02.119335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:02.132186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:02.132201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.229 [2024-11-20 06:25:02.145824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.229 [2024-11-20 06:25:02.145839] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.159322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.159337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.172029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.172044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.185529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.185543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.198697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.198711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.211448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.211463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.225155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.225170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.237570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.237584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.250233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.250247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.263081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.263095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.276402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.276417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.289705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.289719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.303167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.303182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.315938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.315953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.328956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.328971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.342117] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.342130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.355328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.355342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.368099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.368113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.380848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.380862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.394325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.394340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.490 [2024-11-20 06:25:02.406969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.490 [2024-11-20 06:25:02.406984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.420272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.420287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.433840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.433854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.446995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.447009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.460287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.460301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.473605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.473619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.486963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.486978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.499972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.499987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.513249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.513263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.525979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.525993] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.538811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.538825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.552273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.552287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.564805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.564819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.577991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.578009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.590855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.590870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.603689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.603703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.617045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.617059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.629829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.629843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.642328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.642342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.751 [2024-11-20 06:25:02.655812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.751 [2024-11-20 06:25:02.655827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.668236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.668251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.680958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.680972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.693403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.693418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.707089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.707104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.719842] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.719856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.733330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.733345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.746916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.746931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.760511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.760525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.772789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.772803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.785969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.785984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.799474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.799488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.812291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.812306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.825706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.825725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.838431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.838445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.850861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.850875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.864205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.864219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.877226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.877240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.889994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.890008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.903176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.903190] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 19218.50 IOPS, 150.14 MiB/s [2024-11-20T05:25:02.933Z] [2024-11-20 06:25:02.916015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.916030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.013 [2024-11-20 06:25:02.929149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.013 [2024-11-20 06:25:02.929163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:02.942436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:02.942450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:02.955759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:02.955774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:02.969111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:02.969125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:02.981757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:02.981771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:02.995346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:02.995360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.008457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.008471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.022135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.022150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.035396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.035411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.048987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.049001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.061859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.061873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.075146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.075164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.088407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.088421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 
06:25:03.101749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.101764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.115794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.115809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.128698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.128712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.141938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.141953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.154760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.154775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.168126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.168140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.274 [2024-11-20 06:25:03.180993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.274 [2024-11-20 06:25:03.181008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.193941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.193955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.206908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.206922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.220443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.220458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.233519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.233533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.247089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.247104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.260287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.260302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.273889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.273904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.287539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.287554] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.300279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.300294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.313520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.313535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.326874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.326888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.339536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.339551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.352788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.352803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.365324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.365339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.378786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.378801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.391838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.391853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.404461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.404475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.417302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.417317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.430080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.430095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.535 [2024-11-20 06:25:03.443660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.535 [2024-11-20 06:25:03.443675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.457296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.457311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.470581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.470595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.484068] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.484082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.497030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.497044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.509505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.509519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.522977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.522992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.536862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.536876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.549302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.549316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.562378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.562392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.575861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.575875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.589191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.589206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.602524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.602538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.615508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.615523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.629038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.629053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.641513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.641527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.654122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.654136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.666656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.666671] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.679521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.679536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.692893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.692907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:43.796 [2024-11-20 06:25:03.705679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:43.796 [2024-11-20 06:25:03.705693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.718384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.718398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.731623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.731637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.745137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.745152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.758797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.758811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.772065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.772080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.785389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.785403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.798685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.798700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.812020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.812034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.825566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.825581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.838752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.838767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.852289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.852303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.865472] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.865487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.878916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.878931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.891931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.891946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 [2024-11-20 06:25:03.905161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.058 [2024-11-20 06:25:03.905176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.058 19217.80 IOPS, 150.14 MiB/s 00:13:44.058 Latency(us) 00:13:44.058 [2024-11-20T05:25:03.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.059 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:44.059 Nvme1n1 : 5.01 19219.73 150.15 0.00 0.00 6654.41 3058.35 15182.51 00:13:44.059 [2024-11-20T05:25:03.979Z] =================================================================================================================== 00:13:44.059 [2024-11-20T05:25:03.979Z] Total : 19219.73 150.15 0.00 0.00 6654.41 3058.35 15182.51 00:13:44.059 [2024-11-20 06:25:03.915161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.059 [2024-11-20 06:25:03.915175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.059 [2024-11-20 06:25:03.927189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.059 [2024-11-20 06:25:03.927201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.059 [2024-11-20 06:25:03.939220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.059 [2024-11-20 06:25:03.939232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.059 [2024-11-20 06:25:03.951251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.059 [2024-11-20 06:25:03.951264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.059 [2024-11-20 06:25:03.963280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.059 [2024-11-20 06:25:03.963290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.059 [2024-11-20 06:25:03.975309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.059 [2024-11-20 06:25:03.975318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.320 [2024-11-20 06:25:03.987336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.320 [2024-11-20 06:25:03.987344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.320 [2024-11-20 06:25:03.999366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.320 [2024-11-20 06:25:03.999375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.320 [2024-11-20 06:25:04.011396] 
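For reference, the burst of "Requested NSID 1 already in use" records above is produced by repeatedly re-adding a namespace whose NSID is still active on the subsystem; the failure is reported back through the paused-namespace RPC path (nvmf_rpc_ns_paused), as the file:line prefixes show. A minimal sketch of how such a conflict can be reproduced against a running SPDK target follows — the scripts/rpc.py helper and the nqn.2016-06.io.spdk:cnode1 subsystem match this run, while the malloc0 backing bdev and its size are illustrative assumptions:

  #!/usr/bin/env bash
  # Sketch only: assumes a running SPDK nvmf target reachable over its RPC socket,
  # scripts/rpc.py from the SPDK tree, and an existing subsystem. malloc0 is a
  # hypothetical backing bdev created here just for the demonstration.
  SUBSYS=nqn.2016-06.io.spdk:cnode1

  # 64 MiB malloc bdev with 512-byte blocks to back the namespace.
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512

  # First add succeeds and activates NSID 1 on the subsystem.
  scripts/rpc.py nvmf_subsystem_add_ns "$SUBSYS" malloc0 -n 1

  # Re-adding the same NSID while it is active fails with
  # "Requested NSID 1 already in use", matching the records above.
  scripts/rpc.py nvmf_subsystem_add_ns "$SUBSYS" malloc0 -n 1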
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:44.320 [2024-11-20 06:25:04.011409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:44.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2559536) - No such process 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2559536 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 delay0 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.320 06:25:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:44.320 [2024-11-20 06:25:04.184120] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:52.453 Initializing NVMe Controllers 00:13:52.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:52.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:52.453 Initialization complete. Launching workers. 
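(Aside, not part of the captured log.) The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above is the expected failure mode of re-issuing nvmf_subsystem_add_ns for an NSID that is still attached. A minimal sketch, assuming a running nvmf target and SPDK's scripts/rpc.py on PATH (bdev and subsystem names are illustrative, not taken from this run):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                           # backing bdev
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a            # allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1  # first add succeeds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1  # repeat fails: NSID 1 already in use

The abort example's completion statistics resume below.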
00:13:52.453 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 34364 00:13:52.453 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 34480, failed to submit 121 00:13:52.453 success 34398, unsuccessful 82, failed 0 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:52.453 rmmod nvme_tcp 00:13:52.453 rmmod nvme_fabrics 00:13:52.453 rmmod nvme_keyring 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2557320 ']' 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2557320 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2557320 ']' 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2557320 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2557320 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2557320' 00:13:52.453 killing process with pid 2557320 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2557320 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2557320 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:52.453 06:25:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.453 06:25:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.838 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:53.838 00:13:53.838 real 0m34.515s 00:13:53.838 user 0m45.152s 00:13:53.838 sys 0m11.971s 00:13:53.838 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:53.838 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:53.838 ************************************ 00:13:53.838 END TEST nvmf_zcopy 00:13:53.838 ************************************ 00:13:53.838 06:25:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:53.838 06:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:53.839 06:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:53.839 06:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:53.839 ************************************ 00:13:53.839 START TEST nvmf_nmic 00:13:53.839 ************************************ 00:13:53.839 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:53.839 * Looking for test storage... 
00:13:53.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.839 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:53.839 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:13:53.839 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:54.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.100 --rc genhtml_branch_coverage=1 00:13:54.100 --rc genhtml_function_coverage=1 00:13:54.100 --rc genhtml_legend=1 00:13:54.100 --rc geninfo_all_blocks=1 00:13:54.100 --rc geninfo_unexecuted_blocks=1 00:13:54.100 00:13:54.100 ' 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:54.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.100 --rc genhtml_branch_coverage=1 00:13:54.100 --rc genhtml_function_coverage=1 00:13:54.100 --rc genhtml_legend=1 00:13:54.100 --rc geninfo_all_blocks=1 00:13:54.100 --rc geninfo_unexecuted_blocks=1 00:13:54.100 00:13:54.100 ' 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:54.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.100 --rc genhtml_branch_coverage=1 00:13:54.100 --rc genhtml_function_coverage=1 00:13:54.100 --rc genhtml_legend=1 00:13:54.100 --rc geninfo_all_blocks=1 00:13:54.100 --rc geninfo_unexecuted_blocks=1 00:13:54.100 00:13:54.100 ' 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:54.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.100 --rc genhtml_branch_coverage=1 00:13:54.100 --rc genhtml_function_coverage=1 00:13:54.100 --rc genhtml_legend=1 00:13:54.100 --rc geninfo_all_blocks=1 00:13:54.100 --rc geninfo_unexecuted_blocks=1 00:13:54.100 00:13:54.100 ' 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
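(Aside, not part of the captured log.) The cmp_versions/lt trace a few lines up — IFS=.-, read -ra ver1, then a per-field decimal compare — is the suite's shell version check deciding that lcov 1.15 predates 2. A standalone sketch of the same idea, written as a paraphrase rather than the exact scripts/common.sh code:

    # compare dotted version strings field by field; returns 0 when $1 < $2
    ver_lt() {
        local IFS=.-                 # split on dots and dashes, as the trace does
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields default to 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                     # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov 1.15 predates 2"        # mirrors the 'lt 1.15 2' call above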
00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.100 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:54.101 
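(Aside, not part of the captured log.) The "/test/nvmf/common.sh: line 33: [: : integer expression expected" complaint just above — from the traced '[' '' -eq 1 ']' — is the classic empty-operand numeric test: an unset or empty variable reaches [ ... -eq ... ] and bash refuses to treat "" as an integer. A two-line illustration with a hypothetical variable name, not taken from the suite:

    knob=""                                              # unset/empty env toggle
    [ "$knob" -eq 1 ] && echo yes                        # errors: empty string is not an integer
    [ "${knob:-0}" -eq 1 ] || echo "guarded form holds"  # a default of 0 keeps the test well-formed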
06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.101 06:25:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:02.244 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:02.244 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.244 06:25:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:02.244 Found net devices under 0000:31:00.0: cvl_0_0 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:02.244 Found net devices under 0000:31:00.1: cvl_0_1 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:02.244 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:02.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:14:02.245 00:14:02.245 --- 10.0.0.2 ping statistics --- 00:14:02.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.245 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:14:02.245 00:14:02.245 --- 10.0.0.1 ping statistics --- 00:14:02.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.245 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2566418 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2566418 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2566418 ']' 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:02.245 06:25:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.245 [2024-11-20 06:25:21.611661] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:14:02.245 [2024-11-20 06:25:21.611724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.245 [2024-11-20 06:25:21.709117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.245 [2024-11-20 06:25:21.764002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.245 [2024-11-20 06:25:21.764053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.245 [2024-11-20 06:25:21.764062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.245 [2024-11-20 06:25:21.764069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.245 [2024-11-20 06:25:21.764075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.245 [2024-11-20 06:25:21.766148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.245 [2024-11-20 06:25:21.766309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.245 [2024-11-20 06:25:21.766473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.245 [2024-11-20 06:25:21.766473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.817 [2024-11-20 06:25:22.492547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.817 Malloc0 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.817 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.818 [2024-11-20 06:25:22.573558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:02.818 test case1: single bdev can't be used in multiple subsystems 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.818 [2024-11-20 06:25:22.609458] bdev.c:8311:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:02.818 [2024-11-20 06:25:22.609485] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:02.818 [2024-11-20 06:25:22.609495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.818 request: 00:14:02.818 { 00:14:02.818 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:02.818 "namespace": { 00:14:02.818 "bdev_name": "Malloc0", 00:14:02.818 "no_auto_visible": false 
00:14:02.818 }, 00:14:02.818 "method": "nvmf_subsystem_add_ns", 00:14:02.818 "req_id": 1 00:14:02.818 } 00:14:02.818 Got JSON-RPC error response 00:14:02.818 response: 00:14:02.818 { 00:14:02.818 "code": -32602, 00:14:02.818 "message": "Invalid parameters" 00:14:02.818 } 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:02.818 Adding namespace failed - expected result. 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:02.818 test case2: host connect to nvmf target in multiple paths 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.818 [2024-11-20 06:25:22.621662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.818 06:25:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.731 06:25:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:06.114 06:25:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:06.114 06:25:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:14:06.114 06:25:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.114 06:25:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:06.114 06:25:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:14:08.024 06:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:08.024 06:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:08.024 06:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.024 06:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:08.024 06:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.024 06:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:14:08.024 06:25:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:08.024 [global] 00:14:08.024 thread=1 00:14:08.024 invalidate=1 00:14:08.024 rw=write 00:14:08.024 time_based=1 00:14:08.024 runtime=1 00:14:08.024 ioengine=libaio 00:14:08.024 direct=1 00:14:08.024 bs=4096 00:14:08.024 iodepth=1 00:14:08.024 norandommap=0 00:14:08.024 numjobs=1 00:14:08.024 00:14:08.024 verify_dump=1 00:14:08.024 verify_backlog=512 00:14:08.024 verify_state_save=0 00:14:08.024 do_verify=1 00:14:08.024 verify=crc32c-intel 00:14:08.024 [job0] 00:14:08.024 filename=/dev/nvme0n1 00:14:08.024 Could not set queue depth (nvme0n1) 00:14:08.284 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:08.284 fio-3.35 00:14:08.284 Starting 1 thread 00:14:09.668 00:14:09.668 job0: (groupid=0, jobs=1): err= 0: pid=2567888: Wed Nov 20 06:25:29 2024 00:14:09.668 read: IOPS=18, BW=75.5KiB/s (77.4kB/s)(76.0KiB/1006msec) 00:14:09.668 slat (nsec): min=25141, max=25817, avg=25369.47, stdev=159.29 00:14:09.668 clat (usec): min=41489, max=42038, avg=41939.77, stdev=116.97 00:14:09.668 lat (usec): min=41515, max=42064, avg=41965.14, stdev=116.87 00:14:09.668 clat percentiles (usec): 00:14:09.668 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:14:09.668 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:14:09.668 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:09.668 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:09.668 | 99.99th=[42206] 00:14:09.668 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:14:09.668 slat (nsec): min=8991, max=51150, avg=18483.24, stdev=11727.31 00:14:09.668 clat (usec): min=118, max=787, avg=383.70, stdev=130.35 00:14:09.668 lat (usec): min=127, max=819, avg=402.18, stdev=138.63 00:14:09.668 clat percentiles (usec): 00:14:09.668 | 1.00th=[ 161], 5.00th=[ 237], 10.00th=[ 253], 20.00th=[ 269], 00:14:09.668 | 30.00th=[ 297], 40.00th=[ 326], 50.00th=[ 351], 60.00th=[ 392], 00:14:09.668 | 70.00th=[ 437], 80.00th=[ 490], 90.00th=[ 586], 95.00th=[ 635], 00:14:09.668 | 99.00th=[ 717], 99.50th=[ 734], 99.90th=[ 791], 99.95th=[ 791], 00:14:09.668 | 99.99th=[ 791] 00:14:09.668 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:14:09.668 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:09.668 lat (usec) : 250=8.10%, 500=69.87%, 750=18.27%, 1000=0.19% 00:14:09.668 lat (msec) : 50=3.58% 00:14:09.668 cpu : usr=0.60%, sys=1.00%, ctx=531, majf=0, minf=1 00:14:09.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:09.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.668 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:09.668 00:14:09.668 Run status group 0 (all jobs): 00:14:09.668 READ: bw=75.5KiB/s (77.4kB/s), 75.5KiB/s-75.5KiB/s (77.4kB/s-77.4kB/s), io=76.0KiB (77.8kB), run=1006-1006msec 00:14:09.668 WRITE: bw=2036KiB/s (2085kB/s), 2036KiB/s-2036KiB/s (2085kB/s-2085kB/s), io=2048KiB (2097kB), run=1006-1006msec 00:14:09.668 00:14:09.668 Disk stats (read/write): 00:14:09.668 nvme0n1: ios=66/512, merge=0/0, ticks=730/185, in_queue=915, util=93.49% 00:14:09.668 06:25:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.668 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.669 rmmod nvme_tcp 00:14:09.669 rmmod nvme_fabrics 00:14:09.669 rmmod nvme_keyring 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2566418 ']' 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2566418 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2566418 ']' 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2566418 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2566418 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2566418' 00:14:09.669 killing process with pid 2566418 00:14:09.669 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2566418 00:14:09.669 06:25:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2566418 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.929 06:25:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.473 06:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:12.473 00:14:12.473 real 0m18.151s 00:14:12.473 user 0m48.955s 00:14:12.473 sys 0m6.636s 00:14:12.473 06:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:12.473 06:25:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:12.473 ************************************ 00:14:12.473 END TEST nvmf_nmic 00:14:12.473 ************************************ 00:14:12.473 06:25:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:12.473 06:25:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:12.473 06:25:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:12.473 06:25:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:12.473 ************************************ 00:14:12.473 START TEST nvmf_fio_target 00:14:12.473 ************************************ 00:14:12.473 06:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:12.473 * Looking for test storage... 
00:14:12.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.473 06:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:12.473 06:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:14:12.473 06:25:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.474 --rc genhtml_branch_coverage=1 00:14:12.474 --rc genhtml_function_coverage=1 00:14:12.474 --rc genhtml_legend=1 00:14:12.474 --rc geninfo_all_blocks=1 00:14:12.474 --rc geninfo_unexecuted_blocks=1 00:14:12.474 00:14:12.474 ' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.474 --rc genhtml_branch_coverage=1 00:14:12.474 --rc genhtml_function_coverage=1 00:14:12.474 --rc genhtml_legend=1 00:14:12.474 --rc geninfo_all_blocks=1 00:14:12.474 --rc geninfo_unexecuted_blocks=1 00:14:12.474 00:14:12.474 ' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.474 --rc genhtml_branch_coverage=1 00:14:12.474 --rc genhtml_function_coverage=1 00:14:12.474 --rc genhtml_legend=1 00:14:12.474 --rc geninfo_all_blocks=1 00:14:12.474 --rc geninfo_unexecuted_blocks=1 00:14:12.474 00:14:12.474 ' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:12.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.474 --rc genhtml_branch_coverage=1 00:14:12.474 --rc genhtml_function_coverage=1 00:14:12.474 --rc genhtml_legend=1 00:14:12.474 --rc geninfo_all_blocks=1 00:14:12.474 --rc geninfo_unexecuted_blocks=1 00:14:12.474 00:14:12.474 ' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:12.474 06:25:32 
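The "[: : integer expression expected" message above is harmless: common.sh line 33 runs '[ "" -eq 1 ]' when the flag variable is unset. A hedged sketch of a null-safe variant (SPDK_FLAG is an illustrative name, not the script's actual variable):

    # default the flag to 0 so the numeric test never sees an empty string
    if [[ ${SPDK_FLAG:-0} -eq 1 ]]; then
        echo "flag is enabled"
    fi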
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:12.474 06:25:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.622 06:25:39 
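The gather_supported_nvmf_pci_devs trace here fills the e810/x722/mlx arrays from a pci_bus_cache map keyed by "vendor:device". A minimal sketch of how such a cache could be populated (the lspci parsing below is an assumption for illustration, not necessarily how common.sh builds it):

    declare -A pci_bus_cache
    # lspci -Dnmm emits: ADDR "CLASS" "VENDOR" "DEVICE" ... with numeric IDs
    while read -r addr class vendor device _; do
        vendor=${vendor//\"/} device=${device//\"/}
        pci_bus_cache["0x$vendor:0x$device"]+="$addr "
    done < <(lspci -Dnmm)
    e810=(${pci_bus_cache["0x8086:0x159b"]})   # the E810 ID matched in this run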
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:20.622 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:20.622 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.622 06:25:39 
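Each matched PCI function is then resolved to its kernel netdev through sysfs, as the @411-@428 lines below show; the lookup is self-contained enough to replay by hand (PCI address taken from this run):

    pci=0000:31:00.0
    # every netdev registered for this function appears as a directory here
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0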
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:20.622 Found net devices under 0000:31:00.0: cvl_0_0 00:14:20.622 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:20.623 Found net devices under 0000:31:00.1: cvl_0_1 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:20.623 06:25:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:20.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:14:20.623 00:14:20.623 --- 10.0.0.2 ping statistics --- 00:14:20.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.623 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:20.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:14:20.623 00:14:20.623 --- 10.0.0.1 ping statistics --- 00:14:20.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.623 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2572358 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2572358 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2572358 ']' 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:20.623 06:25:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.623 [2024-11-20 06:25:39.799313] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
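nvmf_tcp_init above pushes the target port into its own network namespace so initiator and target traffic really crosses the wire between the two E810 ports; condensed from the trace, with the interface and namespace names of this run:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, tagged so iptr can strip it again at teardown
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host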
00:14:20.623 [2024-11-20 06:25:39.799378] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.623 [2024-11-20 06:25:39.899875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.623 [2024-11-20 06:25:39.953143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.623 [2024-11-20 06:25:39.953199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.623 [2024-11-20 06:25:39.953208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.623 [2024-11-20 06:25:39.953216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.623 [2024-11-20 06:25:39.953222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.623 [2024-11-20 06:25:39.955322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.623 [2024-11-20 06:25:39.955485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.623 [2024-11-20 06:25:39.955650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.623 [2024-11-20 06:25:39.955648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.883 06:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:20.883 06:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:14:20.883 06:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:20.883 06:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:20.883 06:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.883 06:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.883 06:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:21.143 [2024-11-20 06:25:40.836956] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.143 06:25:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.404 06:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:21.404 06:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.664 06:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:21.664 06:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.664 06:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:21.664 06:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:21.925 06:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:21.925 06:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:22.187 06:25:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:22.447 06:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:22.447 06:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:22.708 06:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:22.708 06:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:22.708 06:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:22.708 06:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:22.968 06:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:23.228 06:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:23.228 06:25:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:23.488 06:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:23.489 06:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.489 06:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.749 [2024-11-20 06:25:43.477154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.749 06:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:24.009 06:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:24.009 06:25:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:25.921 06:25:45 
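fio.sh assembles its whole target over RPC before any I/O runs; the calls just traced condense to the following sequence ("rpc.py" stands in for the full scripts/rpc.py path, and the first loop is a compaction of the seven bdev_malloc_create calls):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 0 6); do rpc.py bdev_malloc_create 64 512; done  # Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces surface on the initiator as nvme0n1-nvme0n4, which is exactly the device count waitforserial polls for with lsblk below.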
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:25.921 06:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:14:25.921 06:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.921 06:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:14:25.921 06:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:14:25.921 06:25:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:14:27.864 06:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:27.864 06:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:27.864 06:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.864 06:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:14:27.864 06:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.865 06:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:14:27.865 06:25:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:27.865 [global] 00:14:27.865 thread=1 00:14:27.865 invalidate=1 00:14:27.865 rw=write 00:14:27.865 time_based=1 00:14:27.865 runtime=1 00:14:27.865 ioengine=libaio 00:14:27.865 direct=1 00:14:27.865 bs=4096 00:14:27.865 iodepth=1 00:14:27.865 norandommap=0 00:14:27.865 numjobs=1 00:14:27.865 00:14:27.865 verify_dump=1 00:14:27.865 verify_backlog=512 00:14:27.865 verify_state_save=0 00:14:27.865 do_verify=1 00:14:27.865 verify=crc32c-intel 00:14:27.865 [job0] 00:14:27.865 filename=/dev/nvme0n1 00:14:27.865 [job1] 00:14:27.865 filename=/dev/nvme0n2 00:14:27.865 [job2] 00:14:27.865 filename=/dev/nvme0n3 00:14:27.865 [job3] 00:14:27.865 filename=/dev/nvme0n4 00:14:27.865 Could not set queue depth (nvme0n1) 00:14:27.865 Could not set queue depth (nvme0n2) 00:14:27.865 Could not set queue depth (nvme0n3) 00:14:27.865 Could not set queue depth (nvme0n4) 00:14:28.132 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:28.132 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:28.132 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:28.132 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:28.132 fio-3.35 00:14:28.132 Starting 4 threads 00:14:29.545 00:14:29.545 job0: (groupid=0, jobs=1): err= 0: pid=2574280: Wed Nov 20 06:25:49 2024 00:14:29.545 read: IOPS=542, BW=2172KiB/s (2224kB/s)(2200KiB/1013msec) 00:14:29.545 slat (nsec): min=6868, max=57144, avg=23221.60, stdev=8140.97 00:14:29.545 clat (usec): min=444, max=42021, avg=949.48, stdev=3021.10 00:14:29.545 lat (usec): min=470, max=42047, avg=972.70, stdev=3021.33 00:14:29.545 clat percentiles (usec): 00:14:29.545 | 1.00th=[ 490], 5.00th=[ 578], 10.00th=[ 619], 20.00th=[ 660], 
00:14:29.545 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[ 725], 60.00th=[ 758], 00:14:29.545 | 70.00th=[ 783], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 857], 00:14:29.545 | 99.00th=[ 947], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:14:29.545 | 99.99th=[42206] 00:14:29.545 write: IOPS=1010, BW=4043KiB/s (4140kB/s)(4096KiB/1013msec); 0 zone resets 00:14:29.545 slat (nsec): min=9696, max=53224, avg=28612.99, stdev=10334.19 00:14:29.546 clat (usec): min=112, max=1157, avg=426.89, stdev=86.72 00:14:29.546 lat (usec): min=125, max=1208, avg=455.50, stdev=91.64 00:14:29.546 clat percentiles (usec): 00:14:29.546 | 1.00th=[ 245], 5.00th=[ 289], 10.00th=[ 318], 20.00th=[ 347], 00:14:29.546 | 30.00th=[ 379], 40.00th=[ 420], 50.00th=[ 441], 60.00th=[ 453], 00:14:29.546 | 70.00th=[ 469], 80.00th=[ 486], 90.00th=[ 519], 95.00th=[ 553], 00:14:29.546 | 99.00th=[ 627], 99.50th=[ 644], 99.90th=[ 1020], 99.95th=[ 1156], 00:14:29.546 | 99.99th=[ 1156] 00:14:29.546 bw ( KiB/s): min= 4096, max= 4096, per=40.72%, avg=4096.00, stdev= 0.00, samples=2 00:14:29.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:14:29.546 lat (usec) : 250=0.76%, 500=54.76%, 750=29.22%, 1000=14.93% 00:14:29.546 lat (msec) : 2=0.13%, 50=0.19% 00:14:29.546 cpu : usr=2.08%, sys=4.35%, ctx=1574, majf=0, minf=1 00:14:29.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:29.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.546 issued rwts: total=550,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:29.546 job1: (groupid=0, jobs=1): err= 0: pid=2574281: Wed Nov 20 06:25:49 2024 00:14:29.546 read: IOPS=18, BW=75.0KiB/s (76.8kB/s)(76.0KiB/1013msec) 00:14:29.546 slat (nsec): min=26615, max=27976, avg=27458.37, stdev=348.06 00:14:29.546 clat (usec): min=40795, max=41753, avg=41000.68, stdev=204.94 00:14:29.546 lat (usec): min=40823, max=41780, avg=41028.14, stdev=204.91 00:14:29.546 clat percentiles (usec): 00:14:29.546 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:14:29.546 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:29.546 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:14:29.546 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:29.546 | 99.99th=[41681] 00:14:29.546 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:14:29.546 slat (nsec): min=9908, max=86581, avg=28560.38, stdev=11573.49 00:14:29.546 clat (usec): min=217, max=636, avg=416.69, stdev=76.93 00:14:29.546 lat (usec): min=253, max=671, avg=445.25, stdev=82.70 00:14:29.546 clat percentiles (usec): 00:14:29.546 | 1.00th=[ 243], 5.00th=[ 281], 10.00th=[ 310], 20.00th=[ 338], 00:14:29.546 | 30.00th=[ 375], 40.00th=[ 416], 50.00th=[ 433], 60.00th=[ 453], 00:14:29.546 | 70.00th=[ 461], 80.00th=[ 478], 90.00th=[ 502], 95.00th=[ 519], 00:14:29.546 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 635], 99.95th=[ 635], 00:14:29.546 | 99.99th=[ 635] 00:14:29.546 bw ( KiB/s): min= 4096, max= 4096, per=40.72%, avg=4096.00, stdev= 0.00, samples=1 00:14:29.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:29.546 lat (usec) : 250=1.51%, 500=84.93%, 750=9.98% 00:14:29.546 lat (msec) : 50=3.58% 00:14:29.546 cpu : usr=0.49%, sys=1.68%, ctx=533, majf=0, minf=1 00:14:29.546 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:29.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.546 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:29.546 job2: (groupid=0, jobs=1): err= 0: pid=2574283: Wed Nov 20 06:25:49 2024 00:14:29.546 read: IOPS=18, BW=74.9KiB/s (76.7kB/s)(76.0KiB/1015msec) 00:14:29.546 slat (nsec): min=27170, max=28216, avg=27755.68, stdev=257.48 00:14:29.546 clat (usec): min=40888, max=41123, avg=40968.93, stdev=52.05 00:14:29.546 lat (usec): min=40916, max=41150, avg=40996.69, stdev=52.01 00:14:29.546 clat percentiles (usec): 00:14:29.546 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:14:29.546 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:29.546 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:29.546 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:29.546 | 99.99th=[41157] 00:14:29.546 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:14:29.546 slat (nsec): min=10234, max=66541, avg=28644.76, stdev=11585.99 00:14:29.546 clat (usec): min=207, max=678, avg=421.86, stdev=82.69 00:14:29.546 lat (usec): min=244, max=713, avg=450.50, stdev=89.13 00:14:29.546 clat percentiles (usec): 00:14:29.546 | 1.00th=[ 260], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 343], 00:14:29.546 | 30.00th=[ 371], 40.00th=[ 416], 50.00th=[ 437], 60.00th=[ 453], 00:14:29.546 | 70.00th=[ 469], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 553], 00:14:29.546 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 676], 99.95th=[ 676], 00:14:29.546 | 99.99th=[ 676] 00:14:29.546 bw ( KiB/s): min= 4096, max= 4096, per=40.72%, avg=4096.00, stdev= 0.00, samples=1 00:14:29.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:29.546 lat (usec) : 250=0.38%, 500=80.04%, 750=16.01% 00:14:29.546 lat (msec) : 50=3.58% 00:14:29.546 cpu : usr=0.89%, sys=1.28%, ctx=532, majf=0, minf=1 00:14:29.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:29.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.546 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:29.546 job3: (groupid=0, jobs=1): err= 0: pid=2574284: Wed Nov 20 06:25:49 2024 00:14:29.546 read: IOPS=405, BW=1623KiB/s (1662kB/s)(1652KiB/1018msec) 00:14:29.546 slat (nsec): min=26115, max=45106, avg=26916.85, stdev=2036.15 00:14:29.546 clat (usec): min=548, max=41039, avg=1633.17, stdev=5172.17 00:14:29.546 lat (usec): min=574, max=41067, avg=1660.09, stdev=5172.31 00:14:29.546 clat percentiles (usec): 00:14:29.546 | 1.00th=[ 652], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 857], 00:14:29.546 | 30.00th=[ 914], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 1012], 00:14:29.546 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:14:29.546 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:29.546 | 99.99th=[41157] 00:14:29.546 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:14:29.546 slat (nsec): min=10732, max=65367, avg=32664.55, stdev=8660.08 00:14:29.546 clat (usec): min=249, max=1027, avg=596.45, 
stdev=139.22 00:14:29.546 lat (usec): min=260, max=1062, avg=629.11, stdev=141.76 00:14:29.546 clat percentiles (usec): 00:14:29.546 | 1.00th=[ 281], 5.00th=[ 359], 10.00th=[ 412], 20.00th=[ 486], 00:14:29.546 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 619], 00:14:29.546 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 775], 95.00th=[ 816], 00:14:29.546 | 99.00th=[ 947], 99.50th=[ 988], 99.90th=[ 1029], 99.95th=[ 1029], 00:14:29.546 | 99.99th=[ 1029] 00:14:29.546 bw ( KiB/s): min= 4096, max= 4096, per=40.72%, avg=4096.00, stdev= 0.00, samples=1 00:14:29.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:29.546 lat (usec) : 250=0.11%, 500=13.30%, 750=36.65%, 1000=30.59% 00:14:29.546 lat (msec) : 2=18.59%, 50=0.76% 00:14:29.546 cpu : usr=1.38%, sys=2.85%, ctx=926, majf=0, minf=1 00:14:29.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:29.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.546 issued rwts: total=413,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:29.546 00:14:29.546 Run status group 0 (all jobs): 00:14:29.546 READ: bw=3933KiB/s (4028kB/s), 74.9KiB/s-2172KiB/s (76.7kB/s-2224kB/s), io=4004KiB (4100kB), run=1013-1018msec 00:14:29.546 WRITE: bw=9.82MiB/s (10.3MB/s), 2012KiB/s-4043KiB/s (2060kB/s-4140kB/s), io=10.0MiB (10.5MB), run=1013-1018msec 00:14:29.546 00:14:29.546 Disk stats (read/write): 00:14:29.546 nvme0n1: ios=562/984, merge=0/0, ticks=397/403, in_queue=800, util=86.87% 00:14:29.546 nvme0n2: ios=63/512, merge=0/0, ticks=1438/211, in_queue=1649, util=88.38% 00:14:29.546 nvme0n3: ios=71/512, merge=0/0, ticks=1302/208, in_queue=1510, util=92.53% 00:14:29.546 nvme0n4: ios=465/512, merge=0/0, ticks=1067/291, in_queue=1358, util=93.94% 00:14:29.546 06:25:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:29.546 [global] 00:14:29.546 thread=1 00:14:29.546 invalidate=1 00:14:29.546 rw=randwrite 00:14:29.546 time_based=1 00:14:29.546 runtime=1 00:14:29.546 ioengine=libaio 00:14:29.546 direct=1 00:14:29.546 bs=4096 00:14:29.546 iodepth=1 00:14:29.546 norandommap=0 00:14:29.546 numjobs=1 00:14:29.546 00:14:29.546 verify_dump=1 00:14:29.546 verify_backlog=512 00:14:29.546 verify_state_save=0 00:14:29.546 do_verify=1 00:14:29.546 verify=crc32c-intel 00:14:29.546 [job0] 00:14:29.546 filename=/dev/nvme0n1 00:14:29.546 [job1] 00:14:29.546 filename=/dev/nvme0n2 00:14:29.546 [job2] 00:14:29.546 filename=/dev/nvme0n3 00:14:29.546 [job3] 00:14:29.546 filename=/dev/nvme0n4 00:14:29.546 Could not set queue depth (nvme0n1) 00:14:29.546 Could not set queue depth (nvme0n2) 00:14:29.546 Could not set queue depth (nvme0n3) 00:14:29.546 Could not set queue depth (nvme0n4) 00:14:29.808 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:29.808 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:29.808 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:29.808 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:29.808 fio-3.35 00:14:29.808 Starting 4 threads 
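The fio-wrapper job file above maps one-to-one onto plain fio options; a roughly equivalent single-job invocation, for reference (sketch only; the wrapper also manages verify state across the four devices):

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based --runtime=1 --invalidate=1 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 --verify_backlog=512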
00:14:31.216 00:14:31.216 job0: (groupid=0, jobs=1): err= 0: pid=2574808: Wed Nov 20 06:25:50 2024 00:14:31.216 read: IOPS=18, BW=74.6KiB/s (76.4kB/s)(76.0KiB/1019msec) 00:14:31.216 slat (nsec): min=24815, max=25921, avg=25281.42, stdev=326.10 00:14:31.216 clat (usec): min=40893, max=41012, avg=40966.31, stdev=35.15 00:14:31.216 lat (usec): min=40918, max=41037, avg=40991.59, stdev=35.19 00:14:31.216 clat percentiles (usec): 00:14:31.216 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:14:31.216 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:31.216 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:31.216 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:31.216 | 99.99th=[41157] 00:14:31.216 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:14:31.216 slat (nsec): min=9247, max=68019, avg=28560.63, stdev=7969.49 00:14:31.216 clat (usec): min=186, max=755, avg=432.98, stdev=110.77 00:14:31.216 lat (usec): min=209, max=767, avg=461.54, stdev=112.64 00:14:31.216 clat percentiles (usec): 00:14:31.216 | 1.00th=[ 217], 5.00th=[ 260], 10.00th=[ 297], 20.00th=[ 334], 00:14:31.216 | 30.00th=[ 355], 40.00th=[ 392], 50.00th=[ 437], 60.00th=[ 461], 00:14:31.217 | 70.00th=[ 494], 80.00th=[ 537], 90.00th=[ 586], 95.00th=[ 611], 00:14:31.217 | 99.00th=[ 676], 99.50th=[ 701], 99.90th=[ 758], 99.95th=[ 758], 00:14:31.217 | 99.99th=[ 758] 00:14:31.217 bw ( KiB/s): min= 4096, max= 4096, per=47.23%, avg=4096.00, stdev= 0.00, samples=1 00:14:31.217 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:31.217 lat (usec) : 250=4.33%, 500=65.73%, 750=26.18%, 1000=0.19% 00:14:31.217 lat (msec) : 50=3.58% 00:14:31.217 cpu : usr=0.49%, sys=1.77%, ctx=532, majf=0, minf=1 00:14:31.217 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:31.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.217 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.217 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:31.217 job1: (groupid=0, jobs=1): err= 0: pid=2574809: Wed Nov 20 06:25:50 2024 00:14:31.217 read: IOPS=17, BW=69.6KiB/s (71.3kB/s)(72.0KiB/1034msec) 00:14:31.217 slat (nsec): min=24304, max=28660, avg=26137.83, stdev=1120.84 00:14:31.217 clat (usec): min=1095, max=42024, avg=39540.17, stdev=9599.95 00:14:31.217 lat (usec): min=1123, max=42051, avg=39566.31, stdev=9599.31 00:14:31.217 clat percentiles (usec): 00:14:31.217 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[41157], 20.00th=[41157], 00:14:31.217 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:14:31.217 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:31.217 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:31.217 | 99.99th=[42206] 00:14:31.217 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:14:31.217 slat (nsec): min=8939, max=58328, avg=27927.44, stdev=9894.98 00:14:31.217 clat (usec): min=253, max=1100, avg=593.74, stdev=120.89 00:14:31.217 lat (usec): min=269, max=1137, avg=621.66, stdev=125.09 00:14:31.217 clat percentiles (usec): 00:14:31.217 | 1.00th=[ 306], 5.00th=[ 371], 10.00th=[ 441], 20.00th=[ 490], 00:14:31.217 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:14:31.217 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 
742], 95.00th=[ 766], 00:14:31.217 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 1106], 99.95th=[ 1106], 00:14:31.217 | 99.99th=[ 1106] 00:14:31.217 bw ( KiB/s): min= 4096, max= 4096, per=47.23%, avg=4096.00, stdev= 0.00, samples=1 00:14:31.217 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:31.217 lat (usec) : 500=21.89%, 750=66.79%, 1000=7.55% 00:14:31.217 lat (msec) : 2=0.57%, 50=3.21% 00:14:31.217 cpu : usr=1.16%, sys=1.55%, ctx=530, majf=0, minf=1 00:14:31.217 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:31.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.217 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.217 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:31.217 job2: (groupid=0, jobs=1): err= 0: pid=2574810: Wed Nov 20 06:25:50 2024 00:14:31.217 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:31.217 slat (nsec): min=26815, max=59178, avg=27403.00, stdev=1969.32 00:14:31.217 clat (usec): min=443, max=1175, avg=970.26, stdev=73.07 00:14:31.217 lat (usec): min=471, max=1202, avg=997.66, stdev=73.02 00:14:31.217 clat percentiles (usec): 00:14:31.217 | 1.00th=[ 766], 5.00th=[ 857], 10.00th=[ 889], 20.00th=[ 938], 00:14:31.217 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:14:31.217 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1057], 00:14:31.217 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:14:31.217 | 99.99th=[ 1172] 00:14:31.217 write: IOPS=705, BW=2821KiB/s (2889kB/s)(2824KiB/1001msec); 0 zone resets 00:14:31.217 slat (nsec): min=9197, max=67919, avg=29594.54, stdev=9732.80 00:14:31.217 clat (usec): min=195, max=2270, avg=649.88, stdev=132.82 00:14:31.217 lat (usec): min=205, max=2302, avg=679.47, stdev=136.68 00:14:31.217 clat percentiles (usec): 00:14:31.217 | 1.00th=[ 322], 5.00th=[ 424], 10.00th=[ 502], 20.00th=[ 562], 00:14:31.217 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 652], 60.00th=[ 676], 00:14:31.217 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 832], 00:14:31.217 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 2278], 99.95th=[ 2278], 00:14:31.217 | 99.99th=[ 2278] 00:14:31.217 bw ( KiB/s): min= 4096, max= 4096, per=47.23%, avg=4096.00, stdev= 0.00, samples=1 00:14:31.217 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:31.217 lat (usec) : 250=0.16%, 500=5.58%, 750=41.13%, 1000=40.23% 00:14:31.217 lat (msec) : 2=12.81%, 4=0.08% 00:14:31.217 cpu : usr=3.00%, sys=4.30%, ctx=1218, majf=0, minf=1 00:14:31.217 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:31.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.217 issued rwts: total=512,706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.217 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:31.217 job3: (groupid=0, jobs=1): err= 0: pid=2574811: Wed Nov 20 06:25:50 2024 00:14:31.217 read: IOPS=17, BW=69.6KiB/s (71.3kB/s)(72.0KiB/1034msec) 00:14:31.217 slat (nsec): min=26744, max=27970, avg=27266.78, stdev=299.77 00:14:31.217 clat (usec): min=1125, max=42093, avg=39083.19, stdev=9485.41 00:14:31.217 lat (usec): min=1152, max=42120, avg=39110.45, stdev=9485.41 00:14:31.217 clat percentiles (usec): 00:14:31.217 | 1.00th=[ 1123], 5.00th=[ 1123], 
10.00th=[40633], 20.00th=[41157], 00:14:31.217 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:31.217 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:31.217 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:31.217 | 99.99th=[42206] 00:14:31.217 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:14:31.217 slat (nsec): min=9215, max=52458, avg=30590.80, stdev=8582.54 00:14:31.217 clat (usec): min=117, max=1081, avg=606.42, stdev=117.38 00:14:31.217 lat (usec): min=127, max=1115, avg=637.01, stdev=120.30 00:14:31.217 clat percentiles (usec): 00:14:31.217 | 1.00th=[ 326], 5.00th=[ 400], 10.00th=[ 457], 20.00th=[ 515], 00:14:31.217 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:14:31.217 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 742], 95.00th=[ 766], 00:14:31.217 | 99.00th=[ 816], 99.50th=[ 865], 99.90th=[ 1090], 99.95th=[ 1090], 00:14:31.217 | 99.99th=[ 1090] 00:14:31.217 bw ( KiB/s): min= 4096, max= 4096, per=47.23%, avg=4096.00, stdev= 0.00, samples=1 00:14:31.217 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:31.217 lat (usec) : 250=0.38%, 500=16.79%, 750=71.13%, 1000=7.92% 00:14:31.217 lat (msec) : 2=0.57%, 50=3.21% 00:14:31.217 cpu : usr=1.16%, sys=1.94%, ctx=530, majf=0, minf=1 00:14:31.217 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:31.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.217 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.217 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:31.217 00:14:31.217 Run status group 0 (all jobs): 00:14:31.217 READ: bw=2193KiB/s (2246kB/s), 69.6KiB/s-2046KiB/s (71.3kB/s-2095kB/s), io=2268KiB (2322kB), run=1001-1034msec 00:14:31.217 WRITE: bw=8673KiB/s (8881kB/s), 1981KiB/s-2821KiB/s (2028kB/s-2889kB/s), io=8968KiB (9183kB), run=1001-1034msec 00:14:31.217 00:14:31.217 Disk stats (read/write): 00:14:31.217 nvme0n1: ios=39/512, merge=0/0, ticks=699/223, in_queue=922, util=93.39% 00:14:31.217 nvme0n2: ios=62/512, merge=0/0, ticks=562/243, in_queue=805, util=89.52% 00:14:31.218 nvme0n3: ios=537/512, merge=0/0, ticks=530/287, in_queue=817, util=92.86% 00:14:31.218 nvme0n4: ios=70/512, merge=0/0, ticks=605/231, in_queue=836, util=96.29% 00:14:31.218 06:25:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:31.218 [global] 00:14:31.218 thread=1 00:14:31.218 invalidate=1 00:14:31.218 rw=write 00:14:31.218 time_based=1 00:14:31.218 runtime=1 00:14:31.218 ioengine=libaio 00:14:31.218 direct=1 00:14:31.218 bs=4096 00:14:31.218 iodepth=128 00:14:31.218 norandommap=0 00:14:31.218 numjobs=1 00:14:31.218 00:14:31.218 verify_dump=1 00:14:31.218 verify_backlog=512 00:14:31.218 verify_state_save=0 00:14:31.218 do_verify=1 00:14:31.218 verify=crc32c-intel 00:14:31.218 [job0] 00:14:31.218 filename=/dev/nvme0n1 00:14:31.218 [job1] 00:14:31.218 filename=/dev/nvme0n2 00:14:31.218 [job2] 00:14:31.218 filename=/dev/nvme0n3 00:14:31.218 [job3] 00:14:31.218 filename=/dev/nvme0n4 00:14:31.218 Could not set queue depth (nvme0n1) 00:14:31.218 Could not set queue depth (nvme0n2) 00:14:31.218 Could not set queue depth (nvme0n3) 00:14:31.218 Could not set queue depth (nvme0n4) 00:14:31.483 job0: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:31.483 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:31.483 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:31.483 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:31.483 fio-3.35 00:14:31.483 Starting 4 threads 00:14:32.894 00:14:32.894 job0: (groupid=0, jobs=1): err= 0: pid=2575329: Wed Nov 20 06:25:52 2024 00:14:32.894 read: IOPS=4270, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1004msec) 00:14:32.894 slat (nsec): min=933, max=44614k, avg=105863.14, stdev=1077596.95 00:14:32.894 clat (usec): min=820, max=87564, avg=14490.82, stdev=10393.52 00:14:32.894 lat (usec): min=3585, max=87591, avg=14596.68, stdev=10497.24 00:14:32.894 clat percentiles (usec): 00:14:32.894 | 1.00th=[ 3982], 5.00th=[ 7046], 10.00th=[ 7308], 20.00th=[ 7701], 00:14:32.894 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[11994], 60.00th=[13435], 00:14:32.894 | 70.00th=[14877], 80.00th=[16450], 90.00th=[26870], 95.00th=[37487], 00:14:32.894 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:14:32.894 | 99.99th=[87557] 00:14:32.894 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:14:32.894 slat (nsec): min=1708, max=12251k, avg=102234.11, stdev=664668.17 00:14:32.894 clat (usec): min=4092, max=68209, avg=13956.50, stdev=11829.66 00:14:32.894 lat (usec): min=4100, max=68217, avg=14058.73, stdev=11907.13 00:14:32.894 clat percentiles (usec): 00:14:32.894 | 1.00th=[ 5211], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 7439], 00:14:32.894 | 30.00th=[ 8029], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10945], 00:14:32.894 | 70.00th=[11469], 80.00th=[14877], 90.00th=[25035], 95.00th=[49546], 00:14:32.894 | 99.00th=[53216], 99.50th=[54789], 99.90th=[57410], 99.95th=[57410], 00:14:32.894 | 99.99th=[68682] 00:14:32.894 bw ( KiB/s): min=18144, max=18720, per=21.62%, avg=18432.00, stdev=407.29, samples=2 00:14:32.894 iops : min= 4536, max= 4680, avg=4608.00, stdev=101.82, samples=2 00:14:32.894 lat (usec) : 1000=0.01% 00:14:32.894 lat (msec) : 4=0.47%, 10=50.34%, 20=34.55%, 50=11.36%, 100=3.26% 00:14:32.894 cpu : usr=3.89%, sys=4.59%, ctx=316, majf=0, minf=2 00:14:32.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:32.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:32.894 issued rwts: total=4288,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:32.894 job1: (groupid=0, jobs=1): err= 0: pid=2575330: Wed Nov 20 06:25:52 2024 00:14:32.894 read: IOPS=4535, BW=17.7MiB/s (18.6MB/s)(18.5MiB/1044msec) 00:14:32.894 slat (nsec): min=1022, max=13911k, avg=103350.18, stdev=772268.34 00:14:32.894 clat (usec): min=1593, max=108402, avg=13511.49, stdev=12430.98 00:14:32.894 lat (usec): min=1599, max=108410, avg=13614.84, stdev=12529.04 00:14:32.894 clat percentiles (msec): 00:14:32.894 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 7], 00:14:32.894 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 11], 60.00th=[ 12], 00:14:32.894 | 70.00th=[ 14], 80.00th=[ 16], 90.00th=[ 22], 95.00th=[ 42], 00:14:32.894 | 99.00th=[ 62], 99.50th=[ 91], 99.90th=[ 109], 99.95th=[ 109], 00:14:32.894 | 99.99th=[ 109] 00:14:32.894 
write: IOPS=4904, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1044msec); 0 zone resets 00:14:32.894 slat (nsec): min=1701, max=20025k, avg=89591.12, stdev=649593.37 00:14:32.894 clat (msec): min=2, max=118, avg=12.83, stdev=17.53 00:14:32.894 lat (msec): min=2, max=118, avg=12.92, stdev=17.63 00:14:32.894 clat percentiles (msec): 00:14:32.894 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:14:32.894 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:14:32.894 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 15], 95.00th=[ 34], 00:14:32.894 | 99.00th=[ 109], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 118], 00:14:32.894 | 99.99th=[ 118] 00:14:32.894 bw ( KiB/s): min=19760, max=21200, per=24.03%, avg=20480.00, stdev=1018.23, samples=2 00:14:32.894 iops : min= 4940, max= 5300, avg=5120.00, stdev=254.56, samples=2 00:14:32.894 lat (msec) : 2=0.28%, 4=3.17%, 10=53.02%, 20=33.15%, 50=6.58% 00:14:32.894 lat (msec) : 100=2.70%, 250=1.11% 00:14:32.894 cpu : usr=3.16%, sys=6.42%, ctx=338, majf=0, minf=1 00:14:32.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:32.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:32.894 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:32.894 job2: (groupid=0, jobs=1): err= 0: pid=2575332: Wed Nov 20 06:25:52 2024 00:14:32.894 read: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec) 00:14:32.894 slat (nsec): min=945, max=8178.5k, avg=64637.36, stdev=466971.28 00:14:32.894 clat (usec): min=1282, max=25565, avg=8680.61, stdev=2826.33 00:14:32.894 lat (usec): min=1290, max=25579, avg=8745.25, stdev=2856.97 00:14:32.894 clat percentiles (usec): 00:14:32.894 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5538], 20.00th=[ 6390], 00:14:32.894 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 8717], 00:14:32.894 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[11994], 95.00th=[14222], 00:14:32.894 | 99.00th=[18482], 99.50th=[20317], 99.90th=[21890], 99.95th=[21890], 00:14:32.894 | 99.99th=[25560] 00:14:32.894 write: IOPS=7879, BW=30.8MiB/s (32.3MB/s)(30.9MiB/1004msec); 0 zone resets 00:14:32.894 slat (nsec): min=1578, max=7033.6k, avg=58077.04, stdev=357986.68 00:14:32.894 clat (usec): min=1196, max=17958, avg=7674.53, stdev=2788.72 00:14:32.894 lat (usec): min=1207, max=17960, avg=7732.60, stdev=2808.89 00:14:32.894 clat percentiles (usec): 00:14:32.894 | 1.00th=[ 2999], 5.00th=[ 4047], 10.00th=[ 4948], 20.00th=[ 5932], 00:14:32.894 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7177], 00:14:32.894 | 70.00th=[ 8160], 80.00th=[ 8848], 90.00th=[11994], 95.00th=[13304], 00:14:32.894 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:14:32.894 | 99.99th=[17957] 00:14:32.894 bw ( KiB/s): min=30992, max=31280, per=36.53%, avg=31136.00, stdev=203.65, samples=2 00:14:32.894 iops : min= 7748, max= 7820, avg=7784.00, stdev=50.91, samples=2 00:14:32.894 lat (msec) : 2=0.18%, 4=2.18%, 10=78.38%, 20=18.95%, 50=0.31% 00:14:32.894 cpu : usr=5.78%, sys=7.98%, ctx=638, majf=0, minf=2 00:14:32.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:32.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:32.894 issued rwts: total=7680,7911,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:14:32.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:32.894 job3: (groupid=0, jobs=1): err= 0: pid=2575333: Wed Nov 20 06:25:52 2024 00:14:32.894 read: IOPS=4164, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1008msec) 00:14:32.894 slat (nsec): min=1022, max=14235k, avg=106850.26, stdev=739229.93 00:14:32.894 clat (usec): min=3861, max=62851, avg=13123.29, stdev=6240.03 00:14:32.894 lat (usec): min=3865, max=62859, avg=13230.14, stdev=6303.69 00:14:32.894 clat percentiles (usec): 00:14:32.894 | 1.00th=[ 5735], 5.00th=[ 6980], 10.00th=[ 7701], 20.00th=[ 9241], 00:14:32.894 | 30.00th=[10028], 40.00th=[11076], 50.00th=[11994], 60.00th=[12780], 00:14:32.895 | 70.00th=[14222], 80.00th=[15795], 90.00th=[18220], 95.00th=[22676], 00:14:32.895 | 99.00th=[42730], 99.50th=[54789], 99.90th=[62653], 99.95th=[62653], 00:14:32.895 | 99.99th=[62653] 00:14:32.895 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:14:32.895 slat (nsec): min=1753, max=9527.3k, avg=113765.07, stdev=662524.24 00:14:32.895 clat (usec): min=1174, max=86897, avg=15753.69, stdev=15999.96 00:14:32.895 lat (usec): min=1183, max=86906, avg=15867.46, stdev=16095.81 00:14:32.895 clat percentiles (usec): 00:14:32.895 | 1.00th=[ 3884], 5.00th=[ 4621], 10.00th=[ 5997], 20.00th=[ 6521], 00:14:32.895 | 30.00th=[ 7767], 40.00th=[ 9241], 50.00th=[11207], 60.00th=[12387], 00:14:32.895 | 70.00th=[13698], 80.00th=[15926], 90.00th=[38011], 95.00th=[53740], 00:14:32.895 | 99.00th=[81265], 99.50th=[85459], 99.90th=[86508], 99.95th=[86508], 00:14:32.895 | 99.99th=[86508] 00:14:32.895 bw ( KiB/s): min=17744, max=18920, per=21.51%, avg=18332.00, stdev=831.56, samples=2 00:14:32.895 iops : min= 4436, max= 4730, avg=4583.00, stdev=207.89, samples=2 00:14:32.895 lat (msec) : 2=0.09%, 4=0.90%, 10=36.33%, 20=51.85%, 50=7.39% 00:14:32.895 lat (msec) : 100=3.44% 00:14:32.895 cpu : usr=3.67%, sys=5.36%, ctx=372, majf=0, minf=1 00:14:32.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:32.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:32.895 issued rwts: total=4198,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:32.895 00:14:32.895 Run status group 0 (all jobs): 00:14:32.895 READ: bw=78.2MiB/s (82.0MB/s), 16.3MiB/s-29.9MiB/s (17.1MB/s-31.3MB/s), io=81.6MiB (85.6MB), run=1004-1044msec 00:14:32.895 WRITE: bw=83.2MiB/s (87.3MB/s), 17.9MiB/s-30.8MiB/s (18.7MB/s-32.3MB/s), io=86.9MiB (91.1MB), run=1004-1044msec 00:14:32.895 00:14:32.895 Disk stats (read/write): 00:14:32.895 nvme0n1: ios=3093/3349, merge=0/0, ticks=37390/31618, in_queue=69008, util=99.90% 00:14:32.895 nvme0n2: ios=3602/3951, merge=0/0, ticks=46405/54375, in_queue=100780, util=94.20% 00:14:32.895 nvme0n3: ios=6712/6804, merge=0/0, ticks=50688/49100, in_queue=99788, util=92.86% 00:14:32.895 nvme0n4: ios=3627/4015, merge=0/0, ticks=44199/58972, in_queue=103171, util=97.88% 00:14:32.895 06:25:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:32.895 [global] 00:14:32.895 thread=1 00:14:32.895 invalidate=1 00:14:32.895 rw=randwrite 00:14:32.895 time_based=1 00:14:32.895 runtime=1 00:14:32.895 ioengine=libaio 00:14:32.895 direct=1 00:14:32.895 bs=4096 00:14:32.895 iodepth=128 00:14:32.895 
norandommap=0 00:14:32.895 numjobs=1 00:14:32.895 00:14:32.895 verify_dump=1 00:14:32.895 verify_backlog=512 00:14:32.895 verify_state_save=0 00:14:32.895 do_verify=1 00:14:32.895 verify=crc32c-intel 00:14:32.895 [job0] 00:14:32.895 filename=/dev/nvme0n1 00:14:32.895 [job1] 00:14:32.895 filename=/dev/nvme0n2 00:14:32.895 [job2] 00:14:32.895 filename=/dev/nvme0n3 00:14:32.895 [job3] 00:14:32.895 filename=/dev/nvme0n4 00:14:32.895 Could not set queue depth (nvme0n1) 00:14:32.895 Could not set queue depth (nvme0n2) 00:14:32.895 Could not set queue depth (nvme0n3) 00:14:32.895 Could not set queue depth (nvme0n4) 00:14:33.159 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:33.159 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:33.159 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:33.159 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:33.159 fio-3.35 00:14:33.159 Starting 4 threads 00:14:34.545 00:14:34.545 job0: (groupid=0, jobs=1): err= 0: pid=2575857: Wed Nov 20 06:25:54 2024 00:14:34.545 read: IOPS=5028, BW=19.6MiB/s (20.6MB/s)(19.8MiB/1007msec) 00:14:34.545 slat (nsec): min=964, max=9179.9k, avg=79760.37, stdev=579830.08 00:14:34.545 clat (usec): min=3303, max=35823, avg=10374.78, stdev=4085.66 00:14:34.545 lat (usec): min=3326, max=35830, avg=10454.54, stdev=4133.77 00:14:34.545 clat percentiles (usec): 00:14:34.545 | 1.00th=[ 5538], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 7635], 00:14:34.545 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10028], 00:14:34.545 | 70.00th=[10945], 80.00th=[12518], 90.00th=[15270], 95.00th=[17957], 00:14:34.545 | 99.00th=[26346], 99.50th=[31851], 99.90th=[35390], 99.95th=[35914], 00:14:34.545 | 99.99th=[35914] 00:14:34.545 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:14:34.545 slat (nsec): min=1666, max=12410k, avg=101970.53, stdev=610108.03 00:14:34.545 clat (usec): min=1293, max=64632, avg=14690.97, stdev=12027.08 00:14:34.546 lat (usec): min=1339, max=65503, avg=14792.94, stdev=12103.36 00:14:34.546 clat percentiles (usec): 00:14:34.546 | 1.00th=[ 3556], 5.00th=[ 4686], 10.00th=[ 5997], 20.00th=[ 6587], 00:14:34.546 | 30.00th=[ 6980], 40.00th=[ 7832], 50.00th=[ 8848], 60.00th=[11994], 00:14:34.546 | 70.00th=[15401], 80.00th=[23200], 90.00th=[34341], 95.00th=[38011], 00:14:34.546 | 99.00th=[59507], 99.50th=[62653], 99.90th=[64750], 99.95th=[64750], 00:14:34.546 | 99.99th=[64750] 00:14:34.546 bw ( KiB/s): min=20480, max=20480, per=24.88%, avg=20480.00, stdev= 0.00, samples=2 00:14:34.546 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:14:34.546 lat (msec) : 2=0.01%, 4=0.91%, 10=57.41%, 20=28.56%, 50=12.10% 00:14:34.546 lat (msec) : 100=1.00% 00:14:34.546 cpu : usr=4.47%, sys=5.67%, ctx=368, majf=0, minf=2 00:14:34.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:34.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:34.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:34.546 issued rwts: total=5064,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:34.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:34.546 job1: (groupid=0, jobs=1): err= 0: pid=2575858: Wed Nov 20 06:25:54 2024 00:14:34.546 read: IOPS=3780, 
BW=14.8MiB/s (15.5MB/s)(14.9MiB/1007msec) 00:14:34.546 slat (nsec): min=976, max=20143k, avg=136103.80, stdev=962726.75 00:14:34.546 clat (usec): min=1082, max=76956, avg=14491.70, stdev=9986.32 00:14:34.546 lat (usec): min=4352, max=76964, avg=14627.80, stdev=10087.56 00:14:34.546 clat percentiles (usec): 00:14:34.546 | 1.00th=[ 5407], 5.00th=[ 7308], 10.00th=[ 7963], 20.00th=[ 8717], 00:14:34.546 | 30.00th=[ 9503], 40.00th=[10683], 50.00th=[11469], 60.00th=[12780], 00:14:34.546 | 70.00th=[13960], 80.00th=[16057], 90.00th=[24511], 95.00th=[37487], 00:14:34.546 | 99.00th=[55313], 99.50th=[68682], 99.90th=[77071], 99.95th=[77071], 00:14:34.546 | 99.99th=[77071] 00:14:34.546 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:14:34.546 slat (nsec): min=1658, max=8781.8k, avg=107087.24, stdev=594476.06 00:14:34.546 clat (usec): min=1251, max=76948, avg=17729.97, stdev=14079.08 00:14:34.546 lat (usec): min=1261, max=76961, avg=17837.06, stdev=14123.15 00:14:34.546 clat percentiles (usec): 00:14:34.546 | 1.00th=[ 3163], 5.00th=[ 4490], 10.00th=[ 5145], 20.00th=[ 6587], 00:14:34.546 | 30.00th=[ 8717], 40.00th=[10945], 50.00th=[12780], 60.00th=[14877], 00:14:34.546 | 70.00th=[18744], 80.00th=[28181], 90.00th=[41681], 95.00th=[50070], 00:14:34.546 | 99.00th=[61604], 99.50th=[61604], 99.90th=[66323], 99.95th=[66323], 00:14:34.546 | 99.99th=[77071] 00:14:34.546 bw ( KiB/s): min=16384, max=16384, per=19.90%, avg=16384.00, stdev= 0.00, samples=2 00:14:34.546 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:14:34.546 lat (msec) : 2=0.22%, 4=1.42%, 10=36.07%, 20=40.68%, 50=18.02% 00:14:34.546 lat (msec) : 100=3.59% 00:14:34.546 cpu : usr=3.38%, sys=4.57%, ctx=312, majf=0, minf=1 00:14:34.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:34.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:34.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:34.546 issued rwts: total=3807,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:34.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:34.546 job2: (groupid=0, jobs=1): err= 0: pid=2575859: Wed Nov 20 06:25:54 2024 00:14:34.546 read: IOPS=6476, BW=25.3MiB/s (26.5MB/s)(26.5MiB/1047msec) 00:14:34.546 slat (nsec): min=960, max=11015k, avg=70268.50, stdev=544587.13 00:14:34.546 clat (usec): min=2524, max=67807, avg=10494.83, stdev=9288.98 00:14:34.546 lat (usec): min=2531, max=68645, avg=10565.10, stdev=9332.42 00:14:34.546 clat percentiles (usec): 00:14:34.546 | 1.00th=[ 4293], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6652], 00:14:34.546 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8029], 60.00th=[ 8455], 00:14:34.546 | 70.00th=[ 9241], 80.00th=[10552], 90.00th=[14877], 95.00th=[27657], 00:14:34.546 | 99.00th=[63701], 99.50th=[65799], 99.90th=[67634], 99.95th=[67634], 00:14:34.546 | 99.99th=[67634] 00:14:34.546 write: IOPS=6846, BW=26.7MiB/s (28.0MB/s)(28.0MiB/1047msec); 0 zone resets 00:14:34.546 slat (nsec): min=1521, max=13247k, avg=62980.88, stdev=436327.51 00:14:34.546 clat (usec): min=860, max=69742, avg=8606.83, stdev=8237.37 00:14:34.546 lat (usec): min=870, max=72094, avg=8669.81, stdev=8300.01 00:14:34.546 clat percentiles (usec): 00:14:34.546 | 1.00th=[ 2409], 5.00th=[ 3654], 10.00th=[ 4293], 20.00th=[ 5342], 00:14:34.546 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 7373], 60.00th=[ 7898], 00:14:34.546 | 70.00th=[ 8094], 80.00th=[ 8979], 90.00th=[10945], 95.00th=[13566], 00:14:34.546 | 
99.00th=[62653], 99.50th=[66847], 99.90th=[68682], 99.95th=[69731], 00:14:34.546 | 99.99th=[69731] 00:14:34.546 bw ( KiB/s): min=24696, max=32624, per=34.81%, avg=28660.00, stdev=5605.94, samples=2 00:14:34.546 iops : min= 6174, max= 8156, avg=7165.00, stdev=1401.49, samples=2 00:14:34.546 lat (usec) : 1000=0.06% 00:14:34.546 lat (msec) : 2=0.16%, 4=3.38%, 10=78.26%, 20=13.15%, 50=3.00% 00:14:34.546 lat (msec) : 100=1.99% 00:14:34.546 cpu : usr=4.30%, sys=7.36%, ctx=634, majf=0, minf=1 00:14:34.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:34.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:34.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:34.546 issued rwts: total=6781,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:34.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:34.546 job3: (groupid=0, jobs=1): err= 0: pid=2575860: Wed Nov 20 06:25:54 2024 00:14:34.546 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:14:34.546 slat (nsec): min=980, max=11549k, avg=94874.91, stdev=680659.42 00:14:34.546 clat (usec): min=3374, max=46148, avg=12352.59, stdev=6344.04 00:14:34.546 lat (usec): min=3381, max=46174, avg=12447.46, stdev=6408.48 00:14:34.546 clat percentiles (usec): 00:14:34.546 | 1.00th=[ 5342], 5.00th=[ 6325], 10.00th=[ 7373], 20.00th=[ 8029], 00:14:34.546 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[11076], 00:14:34.546 | 70.00th=[13173], 80.00th=[15270], 90.00th=[21890], 95.00th=[26608], 00:14:34.546 | 99.00th=[33162], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:34.546 | 99.99th=[46400] 00:14:34.546 write: IOPS=5145, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1004msec); 0 zone resets 00:14:34.546 slat (nsec): min=1628, max=13750k, avg=84243.86, stdev=649729.48 00:14:34.546 clat (usec): min=695, max=98834, avg=12398.33, stdev=12978.21 00:14:34.546 lat (usec): min=724, max=98841, avg=12482.58, stdev=13066.57 00:14:34.546 clat percentiles (usec): 00:14:34.546 | 1.00th=[ 3490], 5.00th=[ 5014], 10.00th=[ 5342], 20.00th=[ 6390], 00:14:34.546 | 30.00th=[ 7242], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 9765], 00:14:34.546 | 70.00th=[12125], 80.00th=[14091], 90.00th=[17433], 95.00th=[30016], 00:14:34.546 | 99.00th=[84411], 99.50th=[91751], 99.90th=[99091], 99.95th=[99091], 00:14:34.546 | 99.99th=[99091] 00:14:34.546 bw ( KiB/s): min=16384, max=24576, per=24.88%, avg=20480.00, stdev=5792.62, samples=2 00:14:34.546 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:14:34.546 lat (usec) : 750=0.03% 00:14:34.546 lat (msec) : 2=0.23%, 4=1.12%, 10=54.48%, 20=34.42%, 50=8.02% 00:14:34.546 lat (msec) : 100=1.70% 00:14:34.546 cpu : usr=4.69%, sys=4.89%, ctx=326, majf=0, minf=1 00:14:34.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:34.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:34.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:34.546 issued rwts: total=5120,5166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:34.546 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:34.546 00:14:34.546 Run status group 0 (all jobs): 00:14:34.546 READ: bw=77.5MiB/s (81.3MB/s), 14.8MiB/s-25.3MiB/s (15.5MB/s-26.5MB/s), io=81.1MiB (85.1MB), run=1004-1047msec 00:14:34.546 WRITE: bw=80.4MiB/s (84.3MB/s), 15.9MiB/s-26.7MiB/s (16.7MB/s-28.0MB/s), io=84.2MiB (88.3MB), run=1004-1047msec 00:14:34.546 00:14:34.546 Disk stats (read/write): 
00:14:34.546 nvme0n1: ios=4058/4096, merge=0/0, ticks=40837/60907, in_queue=101744, util=97.29% 00:14:34.546 nvme0n2: ios=3332/3584, merge=0/0, ticks=39471/53976, in_queue=93447, util=91.75% 00:14:34.546 nvme0n3: ios=5632/5830, merge=0/0, ticks=44185/42646, in_queue=86831, util=88.33% 00:14:34.546 nvme0n4: ios=3608/4096, merge=0/0, ticks=31621/37939, in_queue=69560, util=97.34% 00:14:34.546 06:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:34.546 06:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2576086 00:14:34.546 06:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:34.546 06:25:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:34.546 [global] 00:14:34.546 thread=1 00:14:34.546 invalidate=1 00:14:34.546 rw=read 00:14:34.546 time_based=1 00:14:34.546 runtime=10 00:14:34.546 ioengine=libaio 00:14:34.546 direct=1 00:14:34.546 bs=4096 00:14:34.546 iodepth=1 00:14:34.546 norandommap=1 00:14:34.546 numjobs=1 00:14:34.546 00:14:34.546 [job0] 00:14:34.546 filename=/dev/nvme0n1 00:14:34.546 [job1] 00:14:34.546 filename=/dev/nvme0n2 00:14:34.546 [job2] 00:14:34.546 filename=/dev/nvme0n3 00:14:34.546 [job3] 00:14:34.546 filename=/dev/nvme0n4 00:14:34.546 Could not set queue depth (nvme0n1) 00:14:34.546 Could not set queue depth (nvme0n2) 00:14:34.547 Could not set queue depth (nvme0n3) 00:14:34.547 Could not set queue depth (nvme0n4) 00:14:34.823 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:34.823 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:34.823 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:34.823 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:34.823 fio-3.35 00:14:34.823 Starting 4 threads 00:14:37.367 06:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:37.627 06:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:37.627 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=262144, buflen=4096 00:14:37.627 fio: pid=2576388, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:37.889 06:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:37.889 06:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:37.889 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=6635520, buflen=4096 00:14:37.889 fio: pid=2576383, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:37.889 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10850304, buflen=4096 00:14:37.889 fio: pid=2576380, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:37.889 06:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:37.889 06:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:38.150 06:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:38.150 06:25:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:38.150 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=774144, buflen=4096 00:14:38.150 fio: pid=2576381, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:38.150 00:14:38.150 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2576380: Wed Nov 20 06:25:57 2024 00:14:38.150 read: IOPS=892, BW=3570KiB/s (3656kB/s)(10.3MiB/2968msec) 00:14:38.150 slat (usec): min=7, max=12392, avg=34.41, stdev=332.12 00:14:38.150 clat (usec): min=620, max=1792, avg=1071.18, stdev=105.22 00:14:38.150 lat (usec): min=646, max=13495, avg=1105.59, stdev=349.57 00:14:38.150 clat percentiles (usec): 00:14:38.150 | 1.00th=[ 758], 5.00th=[ 848], 10.00th=[ 930], 20.00th=[ 1004], 00:14:38.150 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:14:38.150 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1172], 95.00th=[ 1205], 00:14:38.150 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1319], 99.95th=[ 1565], 00:14:38.150 | 99.99th=[ 1795] 00:14:38.150 bw ( KiB/s): min= 3608, max= 3664, per=63.62%, avg=3628.80, stdev=23.05, samples=5 00:14:38.150 iops : min= 902, max= 916, avg=907.20, stdev= 5.76, samples=5 00:14:38.150 lat (usec) : 750=0.91%, 1000=18.72% 00:14:38.150 lat (msec) : 2=80.34% 00:14:38.150 cpu : usr=1.04%, sys=2.56%, ctx=2654, majf=0, minf=1 00:14:38.150 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.150 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.150 issued rwts: total=2650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.150 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.150 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2576381: Wed Nov 20 06:25:57 2024 00:14:38.150 read: IOPS=59, BW=238KiB/s (244kB/s)(756KiB/3172msec) 00:14:38.150 slat (usec): min=11, max=14926, avg=177.47, stdev=1455.40 00:14:38.150 clat (usec): min=746, max=42089, avg=16475.44, stdev=19724.43 00:14:38.150 lat (usec): min=758, max=42119, avg=16652.55, stdev=19660.86 00:14:38.150 clat percentiles (usec): 00:14:38.150 | 1.00th=[ 758], 5.00th=[ 906], 10.00th=[ 963], 20.00th=[ 1057], 00:14:38.150 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1221], 60.00th=[ 1385], 00:14:38.150 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:38.150 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:38.150 | 99.99th=[42206] 00:14:38.150 bw ( KiB/s): min= 96, max= 577, per=4.07%, avg=232.17, stdev=215.87, samples=6 00:14:38.150 iops : min= 24, max= 144, avg=58.00, stdev=53.89, samples=6 00:14:38.150 lat (usec) : 750=0.53%, 1000=13.16% 00:14:38.150 lat (msec) : 2=47.37%, 10=0.53%, 20=0.53%, 50=37.37% 00:14:38.150 cpu : usr=0.06%, sys=0.19%, ctx=195, majf=0, minf=2 00:14:38.150 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.150 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.150 issued rwts: total=190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.150 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.150 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2576383: Wed Nov 20 06:25:57 2024 00:14:38.150 read: IOPS=572, BW=2287KiB/s (2342kB/s)(6480KiB/2833msec) 00:14:38.150 slat (usec): min=7, max=15109, avg=41.06, stdev=450.23 00:14:38.150 clat (usec): min=469, max=42154, avg=1684.19, stdev=5015.63 00:14:38.150 lat (usec): min=479, max=42180, avg=1725.26, stdev=5033.91 00:14:38.150 clat percentiles (usec): 00:14:38.150 | 1.00th=[ 717], 5.00th=[ 824], 10.00th=[ 881], 20.00th=[ 955], 00:14:38.150 | 30.00th=[ 988], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1090], 00:14:38.150 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1270], 00:14:38.150 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:38.150 | 99.99th=[42206] 00:14:38.150 bw ( KiB/s): min= 96, max= 3696, per=43.40%, avg=2475.20, stdev=1399.61, samples=5 00:14:38.150 iops : min= 24, max= 924, avg=618.80, stdev=349.90, samples=5 00:14:38.150 lat (usec) : 500=0.06%, 750=1.91%, 1000=31.40% 00:14:38.150 lat (msec) : 2=64.90%, 10=0.06%, 20=0.06%, 50=1.54% 00:14:38.150 cpu : usr=0.64%, sys=1.69%, ctx=1623, majf=0, minf=2 00:14:38.150 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.150 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.150 issued rwts: total=1621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.150 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.150 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2576388: Wed Nov 20 06:25:57 2024 00:14:38.150 read: IOPS=24, BW=97.2KiB/s (99.6kB/s)(256KiB/2633msec) 00:14:38.150 slat (nsec): min=25308, max=35751, avg=25979.68, stdev=1259.38 00:14:38.150 clat (usec): min=836, max=42056, avg=40762.53, stdev=5092.54 00:14:38.150 lat (usec): min=872, max=42081, avg=40788.52, stdev=5091.30 00:14:38.150 clat percentiles (usec): 00:14:38.150 | 1.00th=[ 840], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:38.150 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:14:38.151 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:38.151 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:38.151 | 99.99th=[42206] 00:14:38.151 bw ( KiB/s): min= 96, max= 104, per=1.70%, avg=97.60, stdev= 3.58, samples=5 00:14:38.151 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:14:38.151 lat (usec) : 1000=1.54% 00:14:38.151 lat (msec) : 50=96.92% 00:14:38.151 cpu : usr=0.11%, sys=0.00%, ctx=65, majf=0, minf=2 00:14:38.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.151 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.151 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.151 00:14:38.151 Run 
status group 0 (all jobs): 00:14:38.151 READ: bw=5702KiB/s (5839kB/s), 97.2KiB/s-3570KiB/s (99.6kB/s-3656kB/s), io=17.7MiB (18.5MB), run=2633-3172msec 00:14:38.151 00:14:38.151 Disk stats (read/write): 00:14:38.151 nvme0n1: ios=2555/0, merge=0/0, ticks=2675/0, in_queue=2675, util=94.19% 00:14:38.151 nvme0n2: ios=216/0, merge=0/0, ticks=3805/0, in_queue=3805, util=99.19% 00:14:38.151 nvme0n3: ios=1567/0, merge=0/0, ticks=2457/0, in_queue=2457, util=96.07% 00:14:38.151 nvme0n4: ios=63/0, merge=0/0, ticks=2569/0, in_queue=2569, util=96.47% 00:14:38.411 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:38.411 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:38.411 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:38.411 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:38.704 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:38.704 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:38.995 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:38.995 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:38.995 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:38.995 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2576086 00:14:38.995 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:38.995 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.281 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:39.281 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:14:39.281 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:39.281 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.281 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:39.281 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.281 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:14:39.281 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:39.281 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 
00:14:39.281 nvmf hotplug test: fio failed as expected 00:14:39.281 06:25:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.281 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:39.281 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:39.281 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:39.281 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:39.281 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:39.281 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:39.281 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:39.282 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.282 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:14:39.282 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.282 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.282 rmmod nvme_tcp 00:14:39.282 rmmod nvme_fabrics 00:14:39.282 rmmod nvme_keyring 00:14:39.282 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2572358 ']' 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2572358 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2572358 ']' 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2572358 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2572358 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2572358' 00:14:39.541 killing process with pid 2572358 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2572358 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2572358 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.541 06:25:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:42.088 00:14:42.088 real 0m29.599s 00:14:42.088 user 2m36.593s 00:14:42.088 sys 0m9.529s 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.088 ************************************ 00:14:42.088 END TEST nvmf_fio_target 00:14:42.088 ************************************ 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:42.088 ************************************ 00:14:42.088 START TEST nvmf_bdevio 00:14:42.088 ************************************ 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:42.088 * Looking for test storage... 
00:14:42.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.088 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:42.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.089 --rc genhtml_branch_coverage=1 00:14:42.089 --rc genhtml_function_coverage=1 00:14:42.089 --rc genhtml_legend=1 00:14:42.089 --rc geninfo_all_blocks=1 00:14:42.089 --rc geninfo_unexecuted_blocks=1 00:14:42.089 00:14:42.089 ' 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:42.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.089 --rc genhtml_branch_coverage=1 00:14:42.089 --rc genhtml_function_coverage=1 00:14:42.089 --rc genhtml_legend=1 00:14:42.089 --rc geninfo_all_blocks=1 00:14:42.089 --rc geninfo_unexecuted_blocks=1 00:14:42.089 00:14:42.089 ' 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:42.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.089 --rc genhtml_branch_coverage=1 00:14:42.089 --rc genhtml_function_coverage=1 00:14:42.089 --rc genhtml_legend=1 00:14:42.089 --rc geninfo_all_blocks=1 00:14:42.089 --rc geninfo_unexecuted_blocks=1 00:14:42.089 00:14:42.089 ' 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:42.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.089 --rc genhtml_branch_coverage=1 00:14:42.089 --rc genhtml_function_coverage=1 00:14:42.089 --rc genhtml_legend=1 00:14:42.089 --rc geninfo_all_blocks=1 00:14:42.089 --rc geninfo_unexecuted_blocks=1 00:14:42.089 00:14:42.089 ' 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:14:42.089 06:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:50.235 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:50.235 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:50.236 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:50.236 06:26:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:50.236 Found net devices under 0000:31:00.0: cvl_0_0 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:50.236 Found net devices under 0000:31:00.1: cvl_0_1 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.236 
06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.236 06:26:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:50.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:14:50.236 00:14:50.236 --- 10.0.0.2 ping statistics --- 00:14:50.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.236 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:50.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:14:50.236 00:14:50.236 --- 10.0.0.1 ping statistics --- 00:14:50.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.236 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2581459 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2581459 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2581459 ']' 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:50.236 06:26:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.236 [2024-11-20 06:26:09.387407] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:14:50.236 [2024-11-20 06:26:09.387471] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.236 [2024-11-20 06:26:09.490464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.236 [2024-11-20 06:26:09.541605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.236 [2024-11-20 06:26:09.541655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.236 [2024-11-20 06:26:09.541664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.236 [2024-11-20 06:26:09.541671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.236 [2024-11-20 06:26:09.541678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.236 [2024-11-20 06:26:09.543849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:50.236 [2024-11-20 06:26:09.544170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:50.236 [2024-11-20 06:26:09.544330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:50.236 [2024-11-20 06:26:09.544331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.498 [2024-11-20 06:26:10.273635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.498 Malloc0 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.498 06:26:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.498 [2024-11-20 06:26:10.347804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:50.498 { 00:14:50.498 "params": { 00:14:50.498 "name": "Nvme$subsystem", 00:14:50.498 "trtype": "$TEST_TRANSPORT", 00:14:50.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:50.498 "adrfam": "ipv4", 00:14:50.498 "trsvcid": "$NVMF_PORT", 00:14:50.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:50.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:50.498 "hdgst": ${hdgst:-false}, 00:14:50.498 "ddgst": ${ddgst:-false} 00:14:50.498 }, 00:14:50.498 "method": "bdev_nvme_attach_controller" 00:14:50.498 } 00:14:50.498 EOF 00:14:50.498 )") 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:14:50.498 06:26:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:50.498 "params": { 00:14:50.498 "name": "Nvme1", 00:14:50.498 "trtype": "tcp", 00:14:50.498 "traddr": "10.0.0.2", 00:14:50.498 "adrfam": "ipv4", 00:14:50.498 "trsvcid": "4420", 00:14:50.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:50.498 "hdgst": false, 00:14:50.498 "ddgst": false 00:14:50.498 }, 00:14:50.498 "method": "bdev_nvme_attach_controller" 00:14:50.498 }' 00:14:50.498 [2024-11-20 06:26:10.413250] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:14:50.498 [2024-11-20 06:26:10.413321] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581812 ] 00:14:50.760 [2024-11-20 06:26:10.507827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:50.760 [2024-11-20 06:26:10.563793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.760 [2024-11-20 06:26:10.563877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.760 [2024-11-20 06:26:10.563876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.021 I/O targets: 00:14:51.021 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:51.021 00:14:51.021 00:14:51.021 CUnit - A unit testing framework for C - Version 2.1-3 00:14:51.021 http://cunit.sourceforge.net/ 00:14:51.021 00:14:51.021 00:14:51.021 Suite: bdevio tests on: Nvme1n1 00:14:51.282 Test: blockdev write read block ...passed 00:14:51.282 Test: blockdev write zeroes read block ...passed 00:14:51.282 Test: blockdev write zeroes read no split ...passed 00:14:51.282 Test: blockdev write zeroes read split ...passed 00:14:51.282 Test: blockdev write zeroes read split partial ...passed 00:14:51.282 Test: blockdev reset ...[2024-11-20 06:26:11.058128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:51.282 [2024-11-20 06:26:11.058237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19921c0 (9): Bad file descriptor 00:14:51.282 [2024-11-20 06:26:11.078313] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:51.282 passed 00:14:51.282 Test: blockdev write read 8 blocks ...passed 00:14:51.282 Test: blockdev write read size > 128k ...passed 00:14:51.282 Test: blockdev write read invalid size ...passed 00:14:51.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:51.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:51.282 Test: blockdev write read max offset ...passed 00:14:51.544 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:51.544 Test: blockdev writev readv 8 blocks ...passed 00:14:51.544 Test: blockdev writev readv 30 x 1block ...passed 00:14:51.544 Test: blockdev writev readv block ...passed 00:14:51.544 Test: blockdev writev readv size > 128k ...passed 00:14:51.544 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:51.544 Test: blockdev comparev and writev ...[2024-11-20 06:26:11.301447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:51.544 [2024-11-20 06:26:11.301507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:51.544 [2024-11-20 06:26:11.301525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:51.544 [2024-11-20 06:26:11.301535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:51.544 [2024-11-20 06:26:11.302132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:51.544 [2024-11-20 06:26:11.302147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:51.544 [2024-11-20 06:26:11.302162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:51.544 [2024-11-20 06:26:11.302171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:51.544 [2024-11-20 06:26:11.302759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:51.544 [2024-11-20 06:26:11.302773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:51.544 [2024-11-20 06:26:11.302787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:51.544 [2024-11-20 06:26:11.302795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:51.544 [2024-11-20 06:26:11.303377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:51.544 [2024-11-20 06:26:11.303391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:51.544 [2024-11-20 06:26:11.303406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:51.544 [2024-11-20 06:26:11.303413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:51.544 passed 00:14:51.544 Test: blockdev nvme passthru rw ...passed 00:14:51.544 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:26:11.387596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:51.544 [2024-11-20 06:26:11.387614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:51.544 [2024-11-20 06:26:11.387914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:51.544 [2024-11-20 06:26:11.387930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:51.544 [2024-11-20 06:26:11.388393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:51.544 [2024-11-20 06:26:11.388407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:51.544 [2024-11-20 06:26:11.388853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:51.544 [2024-11-20 06:26:11.388868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:51.544 passed 00:14:51.544 Test: blockdev nvme admin passthru ...passed 00:14:51.544 Test: blockdev copy ...passed 00:14:51.544 00:14:51.544 Run Summary: Type Total Ran Passed Failed Inactive 00:14:51.544 suites 1 1 n/a 0 0 00:14:51.544 tests 23 23 23 0 0 00:14:51.544 asserts 152 152 152 0 n/a 00:14:51.544 00:14:51.544 Elapsed time = 1.113 seconds 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.805 rmmod nvme_tcp 00:14:51.805 rmmod nvme_fabrics 00:14:51.805 rmmod nvme_keyring 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
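The bdevio run that just completed above was driven entirely by the JSON fragment gen_nvmf_target_json printed earlier in the trace (name Nvme1, trtype tcp, traddr 10.0.0.2, trsvcid 4420). A minimal standalone sketch of the same invocation, assuming the target from this run is still listening and that the helper wraps the fragment in a bdev-subsystem config (the trace shows only the per-controller fragment before jq assembles the final document):

# Sketch: re-run the traced bdevio pass by hand. The "subsystems"/"bdev"
# wrapper is an assumption; all parameter values are taken verbatim from the log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/test/bdev/bdevio/bdevio" --json <(cat << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
)

The <(...) process substitution is what produces the /dev/fd/62 path seen in the traced command line.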
00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2581459 ']' 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2581459 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2581459 ']' 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2581459 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:51.805 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2581459 00:14:51.806 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:14:51.806 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:14:51.806 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2581459' 00:14:51.806 killing process with pid 2581459 00:14:51.806 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2581459 00:14:51.806 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2581459 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.067 06:26:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.616 06:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:54.616 00:14:54.616 real 0m12.378s 00:14:54.616 user 0m13.681s 00:14:54.616 sys 0m6.372s 00:14:54.616 06:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:54.616 06:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:54.616 ************************************ 00:14:54.616 END TEST nvmf_bdevio 00:14:54.616 ************************************ 00:14:54.616 06:26:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:54.616 00:14:54.616 real 5m7.416s 00:14:54.616 user 11m54.960s 00:14:54.616 sys 1m53.601s 
00:14:54.616 06:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:54.616 06:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:54.616 ************************************ 00:14:54.616 END TEST nvmf_target_core 00:14:54.616 ************************************ 00:14:54.616 06:26:13 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:54.616 06:26:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:54.616 06:26:14 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:54.616 06:26:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:54.616 ************************************ 00:14:54.616 START TEST nvmf_target_extra 00:14:54.616 ************************************ 00:14:54.616 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:54.616 * Looking for test storage... 00:14:54.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:14:54.616 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:54.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.617 --rc genhtml_branch_coverage=1 00:14:54.617 --rc genhtml_function_coverage=1 00:14:54.617 --rc genhtml_legend=1 00:14:54.617 --rc geninfo_all_blocks=1 00:14:54.617 --rc geninfo_unexecuted_blocks=1 00:14:54.617 00:14:54.617 ' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:54.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.617 --rc genhtml_branch_coverage=1 00:14:54.617 --rc genhtml_function_coverage=1 00:14:54.617 --rc genhtml_legend=1 00:14:54.617 --rc geninfo_all_blocks=1 00:14:54.617 --rc geninfo_unexecuted_blocks=1 00:14:54.617 00:14:54.617 ' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:54.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.617 --rc genhtml_branch_coverage=1 00:14:54.617 --rc genhtml_function_coverage=1 00:14:54.617 --rc genhtml_legend=1 00:14:54.617 --rc geninfo_all_blocks=1 00:14:54.617 --rc geninfo_unexecuted_blocks=1 00:14:54.617 00:14:54.617 ' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:54.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.617 --rc genhtml_branch_coverage=1 00:14:54.617 --rc genhtml_function_coverage=1 00:14:54.617 --rc genhtml_legend=1 00:14:54.617 --rc geninfo_all_blocks=1 00:14:54.617 --rc geninfo_unexecuted_blocks=1 00:14:54.617 00:14:54.617 ' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
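The scripts/common.sh trace above (cmp_versions, decimal, the ver1/ver2 arrays) is the dotted-version comparison behind the `lt 1.15 2` lcov check. A condensed, runnable sketch of the same logic, simplified to numeric components only (the real helper also validates each component and supports the other comparison operators):

# Sketch of the version compare traced above: returns 0 (true) when $1 < $2.
lt() {
  local -a ver1 ver2
  local v len
  IFS='.-' read -ra ver1 <<< "$1"
  IFS='.-' read -ra ver2 <<< "$2"
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first higher component: not less
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first lower component: less
  done
  return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo 'lcov < 2: keep the --rc lcov_* option spelling'

This is why the trace exports the "--rc lcov_branch_coverage=1" style LCOV_OPTS: the installed lcov reports 1.15, which sorts below 2.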
00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:54.617 ************************************ 00:14:54.617 START TEST nvmf_example 00:14:54.617 ************************************ 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:54.617 * Looking for test storage... 
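The ever-longer PATH strings in this log (the same /opt/protoc, /opt/go, and /opt/golangci entries stacked many times over) come from /etc/opt/spdk-pkgdep/paths/export.sh being re-sourced each time a test sources nvmf/common.sh; every paths/export.sh@2-4 pass prepends the same three directories again. Harmless, but noisy. A hypothetical idempotent guard, not part of the tree as logged:

# Sketch: prepend each tool dir only once, however often the file is sourced.
for d in /opt/protoc/21.7/bin /opt/go/1.21.1/bin /opt/golangci/1.54.2/bin; do
  case ":$PATH:" in
    *":$d:"*) ;;            # already on PATH: skip
    *) PATH="$d:$PATH" ;;   # not yet present: prepend
  esac
done
export PATH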
00:14:54.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:54.617 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.618 --rc genhtml_branch_coverage=1 00:14:54.618 --rc genhtml_function_coverage=1 00:14:54.618 --rc genhtml_legend=1 00:14:54.618 --rc geninfo_all_blocks=1 00:14:54.618 --rc geninfo_unexecuted_blocks=1 00:14:54.618 00:14:54.618 ' 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.618 --rc genhtml_branch_coverage=1 00:14:54.618 --rc genhtml_function_coverage=1 00:14:54.618 --rc genhtml_legend=1 00:14:54.618 --rc geninfo_all_blocks=1 00:14:54.618 --rc geninfo_unexecuted_blocks=1 00:14:54.618 00:14:54.618 ' 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.618 --rc genhtml_branch_coverage=1 00:14:54.618 --rc genhtml_function_coverage=1 00:14:54.618 --rc genhtml_legend=1 00:14:54.618 --rc geninfo_all_blocks=1 00:14:54.618 --rc geninfo_unexecuted_blocks=1 00:14:54.618 00:14:54.618 ' 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.618 --rc genhtml_branch_coverage=1 00:14:54.618 --rc genhtml_function_coverage=1 00:14:54.618 --rc genhtml_legend=1 00:14:54.618 --rc geninfo_all_blocks=1 00:14:54.618 --rc geninfo_unexecuted_blocks=1 00:14:54.618 00:14:54.618 ' 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:54.618 06:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.618 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:54.880 06:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:14:54.880 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:15:03.020 06:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.020 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:03.021 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:03.021 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:03.021 Found net devices under 0000:31:00.0: cvl_0_0 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:03.021 Found net devices under 0000:31:00.1: cvl_0_1 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.021 06:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.021 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:03.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:03.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms
00:15:03.021
00:15:03.021 --- 10.0.0.2 ping statistics ---
00:15:03.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:03.021 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:03.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:03.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms
00:15:03.021
00:15:03.021 --- 10.0.0.1 ping statistics ---
00:15:03.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:03.021 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:15:03.021 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2586395
00:15:03.022 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:15:03.022 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:15:03.022 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2586395
00:15:03.022 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 2586395 ']'
00:15:03.022 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:03.022 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100
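For reference, the split topology that nvmf_tcp_init built and the pings just verified can be reproduced by hand with the same iproute2/iptables commands that appear in the trace. A minimal sketch (bash, run as root; it assumes the two E810 ports already enumerate as cvl_0_0 and cvl_0_1, as in this run):

  # Target side lives in a private network namespace; the initiator stays in the default one.
  ip netns add cvl_0_0_ns_spdk                         # namespace for the SPDK target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address (default namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                   # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability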
00:15:03.022 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:03.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:03.022 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:03.022 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.283 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
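The rpc_cmd calls traced above are the whole target configuration for this test. In the harness, rpc_cmd is backed by SPDK's scripts/rpc.py talking to the app's /var/tmp/spdk.sock; a hand-run equivalent once the nvmf app is listening (the relative rpc.py path assumes an SPDK checkout as the working directory):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, flags exactly as traced
  ./scripts/rpc.py bdev_malloc_create 64 512                 # 64 MiB RAM bdev, 512 B blocks -> "Malloc0"
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as NSID 1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on the target-namespace address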
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:15:03.544 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:15:15.778 Initializing NVMe Controllers
00:15:15.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:15.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:15.778 Initialization complete. Launching workers.
00:15:15.778 ========================================================
00:15:15.778                                                                                                               Latency(us)
00:15:15.778 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:15:15.778 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   18810.30      73.48    3402.87     629.08   16371.17
00:15:15.778 ========================================================
00:15:15.778 Total                                                                  :   18810.30      73.48    3402.87     629.08   16371.17
00:15:15.778
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:15.778 rmmod nvme_tcp
00:15:15.778 rmmod nvme_fabrics
00:15:15.778 rmmod nvme_keyring
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2586395 ']'
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2586395
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 2586395 ']'
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 2586395
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2586395
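The workload and the summary table above come from spdk_nvme_perf. Unpacking the flags as used in this run (flag meanings follow the tool's standard usage; $SPDK_DIR stands in for the long Jenkins workspace path):

  perf_args=(
      -q 64        # queue depth
      -o 4096      # 4 KiB I/O size
      -w randrw    # random mixed read/write workload
      -M 30        # read percentage of the mix: 30% reads / 70% writes
      -t 10        # run time in seconds
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  )
  "$SPDK_DIR"/build/bin/spdk_nvme_perf "${perf_args[@]}"

Read the summary row as roughly 18.8k IOPS at 73.48 MiB/s, with an average completion latency of 3402.87 us (min 629.08, max 16371.17) over the 10-second window.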
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']'
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2586395'
00:15:15.778 killing process with pid 2586395
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 2586395
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 2586395
00:15:15.778 nvmf threads initialize successfully
00:15:15.778 bdev subsystem init successfully
00:15:15.778 created a nvmf target service
00:15:15.778 create targets's poll groups done
00:15:15.778 all subsystems of target started
00:15:15.778 nvmf target is running
00:15:15.778 all subsystems of target stopped
00:15:15.778 destroy targets's poll groups done
00:15:15.778 destroyed the nvmf target service
00:15:15.778 bdev subsystem finish successfully
00:15:15.778 nvmf threads destroy successfully
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:15.778 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:16.039 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:15:16.039 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:15:16.039 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:16.039 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:16.039
00:15:16.039 real	0m21.625s
00:15:16.039 user	0m46.850s
00:15:16.039 sys	0m7.131s
00:15:16.039 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:16.039 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:15:16.039 ************************************
00:15:16.040 END TEST nvmf_example
00:15:16.040 ************************************
00:15:16.301 06:26:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:16.301 06:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:16.301 06:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:16.301 06:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:16.301 ************************************ 00:15:16.301 START TEST nvmf_filesystem 00:15:16.301 ************************************ 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:16.301 * Looking for test storage... 00:15:16.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:16.301 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:16.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.566 --rc genhtml_branch_coverage=1 00:15:16.566 --rc genhtml_function_coverage=1 00:15:16.566 --rc genhtml_legend=1 00:15:16.566 --rc geninfo_all_blocks=1 00:15:16.566 --rc geninfo_unexecuted_blocks=1 00:15:16.566 00:15:16.566 ' 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:16.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.566 --rc genhtml_branch_coverage=1 00:15:16.566 --rc genhtml_function_coverage=1 00:15:16.566 --rc genhtml_legend=1 00:15:16.566 --rc geninfo_all_blocks=1 00:15:16.566 --rc geninfo_unexecuted_blocks=1 00:15:16.566 00:15:16.566 ' 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:16.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.566 --rc genhtml_branch_coverage=1 00:15:16.566 --rc genhtml_function_coverage=1 00:15:16.566 --rc genhtml_legend=1 00:15:16.566 --rc geninfo_all_blocks=1 00:15:16.566 --rc geninfo_unexecuted_blocks=1 00:15:16.566 00:15:16.566 ' 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:16.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.566 --rc genhtml_branch_coverage=1 00:15:16.566 --rc genhtml_function_coverage=1 00:15:16.566 --rc genhtml_legend=1 00:15:16.566 --rc geninfo_all_blocks=1 00:15:16.566 --rc geninfo_unexecuted_blocks=1 00:15:16.566 00:15:16.566 ' 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:15:16.566 06:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:16.566 
06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:16.566 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:16.567 #define SPDK_CONFIG_H 00:15:16.567 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:16.567 #define SPDK_CONFIG_APPS 1 00:15:16.567 #define SPDK_CONFIG_ARCH native 00:15:16.567 #undef SPDK_CONFIG_ASAN 00:15:16.567 #undef SPDK_CONFIG_AVAHI 00:15:16.567 #undef SPDK_CONFIG_CET 00:15:16.567 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:16.567 #define SPDK_CONFIG_COVERAGE 1 00:15:16.567 #define SPDK_CONFIG_CROSS_PREFIX 00:15:16.567 #undef SPDK_CONFIG_CRYPTO 00:15:16.567 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:16.567 #undef SPDK_CONFIG_CUSTOMOCF 00:15:16.567 #undef SPDK_CONFIG_DAOS 00:15:16.567 #define SPDK_CONFIG_DAOS_DIR 00:15:16.567 #define SPDK_CONFIG_DEBUG 1 00:15:16.567 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:16.567 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:16.567 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:16.567 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:16.567 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:16.567 #undef SPDK_CONFIG_DPDK_UADK 00:15:16.567 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:16.567 #define SPDK_CONFIG_EXAMPLES 1 00:15:16.567 #undef SPDK_CONFIG_FC 00:15:16.567 #define SPDK_CONFIG_FC_PATH 00:15:16.567 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:16.567 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:16.567 #define SPDK_CONFIG_FSDEV 1 00:15:16.567 #undef SPDK_CONFIG_FUSE 00:15:16.567 #undef SPDK_CONFIG_FUZZER 00:15:16.567 #define SPDK_CONFIG_FUZZER_LIB 00:15:16.567 #undef SPDK_CONFIG_GOLANG 00:15:16.567 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:16.567 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:16.567 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:16.567 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:16.567 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:16.567 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:16.567 #undef SPDK_CONFIG_HAVE_LZ4 00:15:16.567 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:16.567 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:16.567 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:16.567 #define SPDK_CONFIG_IDXD 1 00:15:16.567 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:16.567 #undef SPDK_CONFIG_IPSEC_MB 00:15:16.567 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:16.567 #define SPDK_CONFIG_ISAL 1 00:15:16.567 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:16.567 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:16.567 #define SPDK_CONFIG_LIBDIR 00:15:16.567 #undef SPDK_CONFIG_LTO 00:15:16.567 #define SPDK_CONFIG_MAX_LCORES 128 00:15:16.567 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:16.567 #define SPDK_CONFIG_NVME_CUSE 1 00:15:16.567 #undef SPDK_CONFIG_OCF 00:15:16.567 #define SPDK_CONFIG_OCF_PATH 00:15:16.567 #define SPDK_CONFIG_OPENSSL_PATH 00:15:16.567 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:16.567 #define SPDK_CONFIG_PGO_DIR 00:15:16.567 #undef SPDK_CONFIG_PGO_USE 00:15:16.567 #define SPDK_CONFIG_PREFIX /usr/local 00:15:16.567 #undef SPDK_CONFIG_RAID5F 00:15:16.567 #undef SPDK_CONFIG_RBD 00:15:16.567 #define SPDK_CONFIG_RDMA 1 00:15:16.567 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:16.567 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:16.567 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:16.567 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:16.567 #define SPDK_CONFIG_SHARED 1 00:15:16.567 #undef SPDK_CONFIG_SMA 00:15:16.567 #define SPDK_CONFIG_TESTS 1 00:15:16.567 #undef SPDK_CONFIG_TSAN 
00:15:16.567 #define SPDK_CONFIG_UBLK 1 00:15:16.567 #define SPDK_CONFIG_UBSAN 1 00:15:16.567 #undef SPDK_CONFIG_UNIT_TESTS 00:15:16.567 #undef SPDK_CONFIG_URING 00:15:16.567 #define SPDK_CONFIG_URING_PATH 00:15:16.567 #undef SPDK_CONFIG_URING_ZNS 00:15:16.567 #undef SPDK_CONFIG_USDT 00:15:16.567 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:16.567 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:16.567 #define SPDK_CONFIG_VFIO_USER 1 00:15:16.567 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:16.567 #define SPDK_CONFIG_VHOST 1 00:15:16.567 #define SPDK_CONFIG_VIRTIO 1 00:15:16.567 #undef SPDK_CONFIG_VTUNE 00:15:16.567 #define SPDK_CONFIG_VTUNE_DIR 00:15:16.567 #define SPDK_CONFIG_WERROR 1 00:15:16.567 #define SPDK_CONFIG_WPDK_DIR 00:15:16.567 #undef SPDK_CONFIG_XNVME 00:15:16.567 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.567 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:16.568 06:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
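Two harness idioms recur in the trace above. The heavily escaped glob at applications.sh@23 is just bash xtrace quoting of a plain substring match that decides whether this is a debug build, and each `: 0` / `export ...` pair from autotest_common.sh@58 onward is the two-line trace of a default assignment followed by an export. A minimal sketch of both, reconstructed from the trace rather than copied from the SPDK sources (the header path is the one logged above):

    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    # xtrace prints this glob with every character backslash-escaped
    if [[ $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        debug_build=1
    fi
    : "${RUN_NIGHTLY:=0}"   # traced as ': 0' once the expansion resolves
    export RUN_NIGHTLY      # the matching 'export RUN_NIGHTLY' record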
00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:16.568 06:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:16.568 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:16.569 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2589208 ]] 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2589208 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
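The call above enters set_test_storage, which probes for a filesystem with room for the test: it parses `df -T` into associative arrays keyed by mount point, then checks the candidate directories against the request (2147483648 bytes, padded to the requested_size=2214592512 seen below). A minimal sketch of the probe, assuming simplified logic; the column order matches the traced `read`, and the byte-scale values in the trace suggest df's 1 KiB columns are scaled up:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))    # df -T reports 1 KiB blocks
        avails["$mount"]=$((avail * 1024))  # (scaling assumed from the trace)
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)
    requested_size=2214592512
    target_space=${avails[/]}
    (( target_space >= requested_size )) && echo "enough test storage on /"

The new_size=9035030528 computed further down is just used space plus the request, (129356517376 - 122536079360) + 2214592512, and the follow-up guard only rejects the mount if that total would exceed 95% of sizes[/].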
00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.33OB8Q 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.33OB8Q/tests/target /tmp/spdk.33OB8Q 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:15:16.570 06:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122536079360 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356517376 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6820438016 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668225536 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678256640 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847934976 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23371776 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:15:16.570 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=387072 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=116736 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:16.571 06:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677748736 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678260736 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=512000 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:15:16.571 * Looking for test storage... 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122536079360 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9035030528 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.571 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:16.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.833 --rc genhtml_branch_coverage=1 00:15:16.833 --rc genhtml_function_coverage=1 00:15:16.833 --rc genhtml_legend=1 00:15:16.833 --rc geninfo_all_blocks=1 00:15:16.833 --rc geninfo_unexecuted_blocks=1 00:15:16.833 00:15:16.833 ' 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:16.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.833 --rc genhtml_branch_coverage=1 00:15:16.833 --rc genhtml_function_coverage=1 00:15:16.833 --rc genhtml_legend=1 00:15:16.833 --rc geninfo_all_blocks=1 00:15:16.833 --rc geninfo_unexecuted_blocks=1 00:15:16.833 00:15:16.833 ' 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:16.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.833 --rc genhtml_branch_coverage=1 00:15:16.833 --rc genhtml_function_coverage=1 00:15:16.833 --rc genhtml_legend=1 00:15:16.833 --rc geninfo_all_blocks=1 00:15:16.833 --rc geninfo_unexecuted_blocks=1 00:15:16.833 00:15:16.833 ' 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:16.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.833 --rc genhtml_branch_coverage=1 00:15:16.833 --rc genhtml_function_coverage=1 00:15:16.833 --rc genhtml_legend=1 00:15:16.833 --rc geninfo_all_blocks=1 00:15:16.833 --rc geninfo_unexecuted_blocks=1 00:15:16.833 00:15:16.833 ' 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:16.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:16.833 06:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:15:16.833 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:24.978 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:24.978 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:24.978 06:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:24.978 Found net devices under 0000:31:00.0: cvl_0_0 00:15:24.978 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:24.979 Found net devices under 0000:31:00.1: cvl_0_1 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:24.979 06:26:43 
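The device scan above resolved two Intel E810 ports (0000:31:00.0 and 0000:31:00.1, device id 0x159b, driver ice) to the net devices cvl_0_0 and cvl_0_1, so is_hw=yes and nvmf_tcp_init runs next. A simplified sketch of that classification step (the real common.sh walks a prebuilt pci_bus_cache map; lspci is used here only as a stand-in):

    intel=8086; e810_dev=159b
    for pci in $(lspci -D -d "$intel:$e810_dev" | awk '{print $1}'); do
        echo "Found $pci"
        ls "/sys/bus/pci/devices/$pci/net/"       # resolves to cvl_0_0 / cvl_0_1 on this rig
    done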
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:24.979 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:24.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:24.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:15:24.979 00:15:24.979 --- 10.0.0.2 ping statistics --- 00:15:24.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.979 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:24.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:24.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:15:24.979 00:15:24.979 --- 10.0.0.1 ping statistics --- 00:15:24.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.979 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:24.979 ************************************ 00:15:24.979 START TEST nvmf_filesystem_no_in_capsule 00:15:24.979 ************************************ 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2593039 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2593039 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2593039 ']' 00:15:24.979 
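nvmf_tcp_init, traced above, builds the two-port topology used for the rest of the run: the target side of the E810 is moved into a network namespace with 10.0.0.2/24, the initiator port stays in the root namespace with 10.0.0.1/24, an iptables ACCEPT rule opens port 4420, and both directions are verified with ping before nvme-tcp is loaded. The commands, collected from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp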
06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:24.979 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:24.979 [2024-11-20 06:26:44.350995] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:15:24.979 [2024-11-20 06:26:44.351061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.979 [2024-11-20 06:26:44.437837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:24.979 [2024-11-20 06:26:44.490127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.979 [2024-11-20 06:26:44.490179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.979 [2024-11-20 06:26:44.490189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.979 [2024-11-20 06:26:44.490197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.979 [2024-11-20 06:26:44.490203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:24.979 [2024-11-20 06:26:44.492296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.979 [2024-11-20 06:26:44.492455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.979 [2024-11-20 06:26:44.492618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.979 [2024-11-20 06:26:44.492619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:25.552 [2024-11-20 06:26:45.226602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:25.552 Malloc1 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.552 06:26:45 
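With nvmf_tgt (pid 2593039) running inside the namespace and its four reactors up, the target is configured over SPDK's JSON-RPC socket. The calls traced in this span, written out with the stock scripts/rpc.py client (rpc_cmd in the test scripts is effectively a wrapper around it; the listener call appears just below):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420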
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:25.552 [2024-11-20 06:26:45.393087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.552 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:25.552 { 00:15:25.552 "name": "Malloc1", 00:15:25.552 "aliases": [ 00:15:25.552 "cbe5142d-68ab-4ed7-bd52-515fc93cee2c" 00:15:25.553 ], 00:15:25.553 "product_name": "Malloc disk", 00:15:25.553 "block_size": 512, 00:15:25.553 "num_blocks": 1048576, 00:15:25.553 "uuid": "cbe5142d-68ab-4ed7-bd52-515fc93cee2c", 00:15:25.553 "assigned_rate_limits": { 00:15:25.553 "rw_ios_per_sec": 0, 00:15:25.553 "rw_mbytes_per_sec": 0, 00:15:25.553 "r_mbytes_per_sec": 0, 00:15:25.553 "w_mbytes_per_sec": 0 00:15:25.553 }, 00:15:25.553 "claimed": true, 00:15:25.553 "claim_type": "exclusive_write", 00:15:25.553 "zoned": false, 00:15:25.553 "supported_io_types": { 00:15:25.553 "read": 
true, 00:15:25.553 "write": true, 00:15:25.553 "unmap": true, 00:15:25.553 "flush": true, 00:15:25.553 "reset": true, 00:15:25.553 "nvme_admin": false, 00:15:25.553 "nvme_io": false, 00:15:25.553 "nvme_io_md": false, 00:15:25.553 "write_zeroes": true, 00:15:25.553 "zcopy": true, 00:15:25.553 "get_zone_info": false, 00:15:25.553 "zone_management": false, 00:15:25.553 "zone_append": false, 00:15:25.553 "compare": false, 00:15:25.553 "compare_and_write": false, 00:15:25.553 "abort": true, 00:15:25.553 "seek_hole": false, 00:15:25.553 "seek_data": false, 00:15:25.553 "copy": true, 00:15:25.553 "nvme_iov_md": false 00:15:25.553 }, 00:15:25.553 "memory_domains": [ 00:15:25.553 { 00:15:25.553 "dma_device_id": "system", 00:15:25.553 "dma_device_type": 1 00:15:25.553 }, 00:15:25.553 { 00:15:25.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.553 "dma_device_type": 2 00:15:25.553 } 00:15:25.553 ], 00:15:25.553 "driver_specific": {} 00:15:25.553 } 00:15:25.553 ]' 00:15:25.553 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:25.813 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:15:25.813 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:25.814 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:15:25.814 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:15:25.814 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:15:25.814 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:25.814 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:27.226 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:27.226 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:15:27.226 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:27.226 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:27.226 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:15:29.135 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:29.135 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:29.135 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:29.396 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:29.656 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:29.916 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:31.301 ************************************ 00:15:31.301 START TEST filesystem_ext4 00:15:31.301 ************************************ 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
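On the initiator side the exported namespace is attached with nvme-cli, located by its serial number, size-checked against the 512 MiB malloc bdev, and partitioned. All of the commands below are lifted from the trace:

    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME    # resolves to nvme0n1
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe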
00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:15:31.301 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:31.301 mke2fs 1.47.0 (5-Feb-2023) 00:15:31.301 Discarding device blocks: 0/522240 done 00:15:31.301 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:31.301 Filesystem UUID: 8d9619e8-6be9-4c14-aa30-a81bf462f6b2 00:15:31.301 Superblock backups stored on blocks: 00:15:31.301 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:31.301 00:15:31.301 Allocating group tables: 0/64 done 00:15:31.301 Writing inode tables: 0/64 done 00:15:33.844 Creating journal (8192 blocks): done 00:15:33.844 Writing superblocks and filesystem accounting information: 0/64 done 00:15:33.844 00:15:33.844 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:15:33.844 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:39.130 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:39.130 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:15:39.130 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:39.130 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:39.394 
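filesystem_ext4 above is one instance of the generic check each sub-test runs against the remote namespace: format, mount, perform a small write, sync, remove it, and unmount, then confirm the target process survived the I/O (the kill -0 liveness probe follows just below). In outline:

    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"        # target pid 2593039 in this run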
06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2593039 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:39.394 00:15:39.394 real 0m8.255s 00:15:39.394 user 0m0.027s 00:15:39.394 sys 0m0.077s 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:39.394 ************************************ 00:15:39.394 END TEST filesystem_ext4 00:15:39.394 ************************************ 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:39.394 ************************************ 00:15:39.394 START TEST filesystem_btrfs 00:15:39.394 ************************************ 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:15:39.394 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:15:39.394 06:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:15:39.395 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:15:39.395 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:39.657 btrfs-progs v6.8.1 00:15:39.657 See https://btrfs.readthedocs.io for more information. 00:15:39.657 00:15:39.657 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:15:39.657 NOTE: several default settings have changed in version 5.15, please make sure 00:15:39.657 this does not affect your deployments: 00:15:39.657 - DUP for metadata (-m dup) 00:15:39.657 - enabled no-holes (-O no-holes) 00:15:39.657 - enabled free-space-tree (-R free-space-tree) 00:15:39.657 00:15:39.657 Label: (null) 00:15:39.657 UUID: db922a4c-7c26-45d8-a4b5-f05dd47afd90 00:15:39.657 Node size: 16384 00:15:39.657 Sector size: 4096 (CPU page size: 4096) 00:15:39.657 Filesystem size: 510.00MiB 00:15:39.657 Block group profiles: 00:15:39.657 Data: single 8.00MiB 00:15:39.657 Metadata: DUP 32.00MiB 00:15:39.657 System: DUP 8.00MiB 00:15:39.657 SSD detected: yes 00:15:39.657 Zoned device: no 00:15:39.657 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:39.657 Checksum: crc32c 00:15:39.657 Number of devices: 1 00:15:39.657 Devices: 00:15:39.657 ID SIZE PATH 00:15:39.657 1 510.00MiB /dev/nvme0n1p1 00:15:39.657 00:15:39.657 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:15:39.657 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2593039 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:40.228 
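make_filesystem, re-entered here for btrfs (and again for xfs below), only varies the force flag between filesystems: mkfs.ext4 takes -F while mkfs.btrfs and mkfs.xfs take -f, which is exactly the '[' btrfs = ext4 ']' branch traced above. A sketch of that selection:

    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    mkfs."$fstype" $force "$dev_name"      # dev_name=/dev/nvme0n1p1 here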
06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:40.228 00:15:40.228 real 0m0.766s 00:15:40.228 user 0m0.019s 00:15:40.228 sys 0m0.126s 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:40.228 ************************************ 00:15:40.228 END TEST filesystem_btrfs 00:15:40.228 ************************************ 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:40.228 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:40.228 ************************************ 00:15:40.228 START TEST filesystem_xfs 00:15:40.228 ************************************ 00:15:40.228 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:15:40.228 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:40.228 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:40.228 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:40.228 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:15:40.228 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:40.228 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:15:40.228 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:15:40.228 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:15:40.228 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:15:40.228 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:40.228 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:40.228 = sectsz=512 attr=2, projid32bit=1 00:15:40.228 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:40.228 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:40.228 data 
= bsize=4096 blocks=130560, imaxpct=25 00:15:40.228 = sunit=0 swidth=0 blks 00:15:40.228 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:40.228 log =internal log bsize=4096 blocks=16384, version=2 00:15:40.228 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:40.228 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:41.168 Discarding blocks...Done. 00:15:41.168 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:15:41.168 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2593039 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:43.081 00:15:43.081 real 0m2.686s 00:15:43.081 user 0m0.023s 00:15:43.081 sys 0m0.081s 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:43.081 ************************************ 00:15:43.081 END TEST filesystem_xfs 00:15:43.081 ************************************ 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:43.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.081 06:27:02 
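Teardown, traced here and continuing below: the test partition is removed under flock on the parent device (an assumption about the intent: serializing against concurrent re-scans of the disk), the initiator disconnects, the subsystem is deleted over RPC, and the target process is killed and reaped:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 2593039 && wait 2593039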
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2593039 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2593039 ']' 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2593039 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:43.081 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2593039 00:15:43.342 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:43.342 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:43.342 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2593039' 00:15:43.342 killing process with pid 2593039 00:15:43.342 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 2593039 00:15:43.342 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 2593039 00:15:43.342 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:43.342 00:15:43.342 real 0m18.940s 00:15:43.342 user 1m14.810s 00:15:43.342 sys 0m1.460s 00:15:43.342 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:43.342 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:43.342 ************************************ 00:15:43.342 END TEST nvmf_filesystem_no_in_capsule 00:15:43.342 ************************************ 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:43.604 ************************************ 00:15:43.604 START TEST nvmf_filesystem_in_capsule 00:15:43.604 ************************************ 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2597053 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2597053 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2597053 ']' 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:43.604 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:43.604 [2024-11-20 06:27:03.380341] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:15:43.604 [2024-11-20 06:27:03.380396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.604 [2024-11-20 06:27:03.471662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.604 [2024-11-20 06:27:03.503759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.604 [2024-11-20 06:27:03.503788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.604 [2024-11-20 06:27:03.503794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.604 [2024-11-20 06:27:03.503799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.604 [2024-11-20 06:27:03.503803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.604 [2024-11-20 06:27:03.505106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.604 [2024-11-20 06:27:03.505259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.604 [2024-11-20 06:27:03.505390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.604 [2024-11-20 06:27:03.505392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:44.605 [2024-11-20 06:27:04.220218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.605 06:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:44.605 Malloc1 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:44.605 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:44.606 [2024-11-20 06:27:04.354214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:15:44.606 06:27:04 
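At this point the entire target-side provisioning has run, and it is five RPCs. A sketch using scripts/rpc.py (rpc_cmd in the harness is effectively a wrapper over it), with the values from this run:

    # TCP transport; -c 4096 permits 4 KiB of in-capsule data (the point of this
    # test variant), -u 8192 sets the IO unit size
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # 512 MiB malloc bdev with 512-byte blocks (hence num_blocks=1048576 in the
    # bdev_get_bdevs dump that follows)
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    # subsystem with the serial the host will look for; -a allows any host NQN
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420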
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:15:44.606 { 00:15:44.606 "name": "Malloc1", 00:15:44.606 "aliases": [ 00:15:44.606 "5385b883-ce9f-4480-a177-57c6663fedd4" 00:15:44.606 ], 00:15:44.606 "product_name": "Malloc disk", 00:15:44.606 "block_size": 512, 00:15:44.606 "num_blocks": 1048576, 00:15:44.606 "uuid": "5385b883-ce9f-4480-a177-57c6663fedd4", 00:15:44.606 "assigned_rate_limits": { 00:15:44.606 "rw_ios_per_sec": 0, 00:15:44.606 "rw_mbytes_per_sec": 0, 00:15:44.606 "r_mbytes_per_sec": 0, 00:15:44.606 "w_mbytes_per_sec": 0 00:15:44.606 }, 00:15:44.606 "claimed": true, 00:15:44.606 "claim_type": "exclusive_write", 00:15:44.606 "zoned": false, 00:15:44.606 "supported_io_types": { 00:15:44.606 "read": true, 00:15:44.606 "write": true, 00:15:44.606 "unmap": true, 00:15:44.606 "flush": true, 00:15:44.606 "reset": true, 00:15:44.606 "nvme_admin": false, 00:15:44.606 "nvme_io": false, 00:15:44.606 "nvme_io_md": false, 00:15:44.606 "write_zeroes": true, 00:15:44.606 "zcopy": true, 00:15:44.606 "get_zone_info": false, 00:15:44.606 "zone_management": false, 00:15:44.606 "zone_append": false, 00:15:44.606 "compare": false, 00:15:44.606 "compare_and_write": false, 00:15:44.606 "abort": true, 00:15:44.606 "seek_hole": false, 00:15:44.606 "seek_data": false, 00:15:44.606 "copy": true, 00:15:44.606 "nvme_iov_md": false 00:15:44.606 }, 00:15:44.606 "memory_domains": [ 00:15:44.606 { 00:15:44.606 "dma_device_id": "system", 00:15:44.606 "dma_device_type": 1 00:15:44.606 }, 00:15:44.606 { 00:15:44.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.606 "dma_device_type": 2 00:15:44.606 } 00:15:44.606 ], 00:15:44.606 "driver_specific": {} 00:15:44.606 } 00:15:44.606 ]' 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:44.606 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:46.520 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:46.520 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:15:46.520 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:46.520 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:46.520 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:15:48.439 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:48.439 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:48.439 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:48.439 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:48.439 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:48.439 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:15:48.439 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:48.439 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:48.439 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:48.439 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:48.439 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:48.439 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:48.439 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:48.439 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:48.439 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:48.439 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:48.439 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:48.439 06:27:08 
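The host-side connect-and-discover step just traced follows this shape (commands lifted from the trace, with the waitforserial counter loop simplified to a plain poll and the hostnqn/hostid arguments elided):

    # attach the initiator to the subsystem over TCP
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # waitforserial, simplified: block until a namespace with our serial enumerates
    until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    # map serial -> kernel device name (nvme0n1 in this run), then lay down one GPT partition
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe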
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:48.439 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:49.380 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:49.380 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:49.380 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:49.380 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:49.380 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:49.641 ************************************ 00:15:49.641 START TEST filesystem_in_capsule_ext4 00:15:49.641 ************************************ 00:15:49.641 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:49.641 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:49.641 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:49.641 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:49.641 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:15:49.641 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:49.641 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:15:49.641 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:15:49.641 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:15:49.641 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:15:49.641 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:49.641 mke2fs 1.47.0 (5-Feb-2023) 00:15:49.641 Discarding device blocks: 0/522240 done 00:15:49.641 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:49.641 Filesystem UUID: 8d816586-8ba1-4687-b83e-bffe853b7ebb 00:15:49.641 Superblock backups stored on blocks: 00:15:49.641 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:49.641 00:15:49.641 Allocating group tables: 0/64 done 00:15:49.641 Writing inode tables: 
0/64 done 00:15:49.641 Creating journal (8192 blocks): done 00:15:49.976 Writing superblocks and filesystem accounting information: 0/64 done 00:15:49.976 00:15:49.976 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:15:49.976 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2597053 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:56.627 00:15:56.627 real 0m6.466s 00:15:56.627 user 0m0.033s 00:15:56.627 sys 0m0.070s 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:56.627 ************************************ 00:15:56.627 END TEST filesystem_in_capsule_ext4 00:15:56.627 ************************************ 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:56.627 
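The same make_filesystem helper drives all three variants that follow; the only per-filesystem wrinkle visible in the trace is the force flag (mkfs.ext4 takes -F, btrfs and xfs take -f). A condensed sketch of that selection — make_fs is a hypothetical name for illustration, and the real helper's retry loop is omitted:

    # condensed make_filesystem (common.sh); retry logic omitted
    make_fs() {
        local fstype=$1 dev_name=$2 force
        [[ $fstype == ext4 ]] && force=-F || force=-f
        mkfs."$fstype" "$force" "$dev_name"
    }
    make_fs ext4 /dev/nvme0n1p1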
************************************ 00:15:56.627 START TEST filesystem_in_capsule_btrfs 00:15:56.627 ************************************ 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:15:56.627 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:56.627 btrfs-progs v6.8.1 00:15:56.627 See https://btrfs.readthedocs.io for more information. 00:15:56.627 00:15:56.627 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:56.627 NOTE: several default settings have changed in version 5.15, please make sure 00:15:56.627 this does not affect your deployments: 00:15:56.627 - DUP for metadata (-m dup) 00:15:56.627 - enabled no-holes (-O no-holes) 00:15:56.627 - enabled free-space-tree (-R free-space-tree) 00:15:56.627 00:15:56.627 Label: (null) 00:15:56.627 UUID: 0a7c3d25-1a4a-4407-89a8-0b1487ca58c9 00:15:56.627 Node size: 16384 00:15:56.627 Sector size: 4096 (CPU page size: 4096) 00:15:56.627 Filesystem size: 510.00MiB 00:15:56.627 Block group profiles: 00:15:56.628 Data: single 8.00MiB 00:15:56.628 Metadata: DUP 32.00MiB 00:15:56.628 System: DUP 8.00MiB 00:15:56.628 SSD detected: yes 00:15:56.628 Zoned device: no 00:15:56.628 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:56.628 Checksum: crc32c 00:15:56.628 Number of devices: 1 00:15:56.628 Devices: 00:15:56.628 ID SIZE PATH 00:15:56.628 1 510.00MiB /dev/nvme0n1p1 00:15:56.628 00:15:56.628 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:15:56.628 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:57.198 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:57.198 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:15:57.198 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:57.198 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:15:57.198 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:57.198 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:57.198 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2597053 00:15:57.198 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:57.198 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:57.198 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:57.198 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:57.198 00:15:57.198 real 0m1.140s 00:15:57.198 user 0m0.022s 00:15:57.198 sys 0m0.126s 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:15:57.198 ************************************ 00:15:57.198 END TEST filesystem_in_capsule_btrfs 00:15:57.198 ************************************ 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:57.198 ************************************ 00:15:57.198 START TEST filesystem_in_capsule_xfs 00:15:57.198 ************************************ 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:15:57.198 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:57.458 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:57.458 = sectsz=512 attr=2, projid32bit=1 00:15:57.458 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:57.458 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:57.458 data = bsize=4096 blocks=130560, imaxpct=25 00:15:57.458 = sunit=0 swidth=0 blks 00:15:57.458 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:57.458 log =internal log bsize=4096 blocks=16384, version=2 00:15:57.458 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:57.458 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:58.398 Discarding blocks...Done. 
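After each mkfs, the verification body is identical across ext4, btrfs, and xfs (filesystem.sh lines 23–43 in the trace): mount the partition, do a write–sync–delete round trip, unmount, and confirm the target survived. Reduced to its essentials:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    # the target must still be running and both device nodes still enumerated
    kill -0 "$nvmfpid"
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1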
00:15:58.398 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:15:58.398 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:00.941 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:00.941 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:00.941 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:00.941 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:00.941 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:00.941 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:00.941 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2597053 00:16:00.941 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:00.941 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:00.941 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:00.941 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:00.941 00:16:00.941 real 0m3.729s 00:16:00.941 user 0m0.031s 00:16:00.941 sys 0m0.074s 00:16:00.942 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:00.942 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:00.942 ************************************ 00:16:00.942 END TEST filesystem_in_capsule_xfs 00:16:00.942 ************************************ 00:16:00.942 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:01.515 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:01.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2597053 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2597053 ']' 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2597053 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:01.776 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2597053 00:16:02.037 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:02.037 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:02.037 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2597053' 00:16:02.037 killing process with pid 2597053 00:16:02.037 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 2597053 00:16:02.037 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 2597053 00:16:02.037 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:02.037 00:16:02.037 real 0m18.610s 00:16:02.037 user 1m13.593s 00:16:02.037 sys 0m1.386s 00:16:02.037 06:27:21 
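The teardown that just ran mirrors the setup in reverse. A sketch of the sequence, with waitforserial_disconnect simplified to a plain poll:

    # drop the host connection, then wait for the serial to vanish from lsblk
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    # remove the subsystem over RPC and stop the target (killprocess)
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"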
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:02.037 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:02.037 ************************************ 00:16:02.037 END TEST nvmf_filesystem_in_capsule 00:16:02.037 ************************************ 00:16:02.298 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:02.298 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:02.298 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:16:02.298 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:02.298 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:16:02.298 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:02.298 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:02.298 rmmod nvme_tcp 00:16:02.298 rmmod nvme_fabrics 00:16:02.298 rmmod nvme_keyring 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.298 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.214 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:04.214 00:16:04.214 real 0m48.104s 00:16:04.214 user 2m30.815s 00:16:04.214 sys 0m8.909s 00:16:04.214 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:04.214 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:04.214 
************************************ 00:16:04.214 END TEST nvmf_filesystem 00:16:04.214 ************************************ 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.477 ************************************ 00:16:04.477 START TEST nvmf_target_discovery 00:16:04.477 ************************************ 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:04.477 * Looking for test storage... 00:16:04.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.477 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:04.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.739 --rc genhtml_branch_coverage=1 00:16:04.739 --rc genhtml_function_coverage=1 00:16:04.739 --rc genhtml_legend=1 00:16:04.739 --rc geninfo_all_blocks=1 00:16:04.739 --rc geninfo_unexecuted_blocks=1 00:16:04.739 00:16:04.739 ' 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:04.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.739 --rc genhtml_branch_coverage=1 00:16:04.739 --rc genhtml_function_coverage=1 00:16:04.739 --rc genhtml_legend=1 00:16:04.739 --rc geninfo_all_blocks=1 00:16:04.739 --rc geninfo_unexecuted_blocks=1 00:16:04.739 00:16:04.739 ' 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:04.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.739 --rc genhtml_branch_coverage=1 00:16:04.739 --rc genhtml_function_coverage=1 00:16:04.739 --rc genhtml_legend=1 00:16:04.739 --rc geninfo_all_blocks=1 00:16:04.739 --rc geninfo_unexecuted_blocks=1 00:16:04.739 00:16:04.739 ' 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:04.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.739 --rc genhtml_branch_coverage=1 00:16:04.739 --rc genhtml_function_coverage=1 00:16:04.739 --rc genhtml_legend=1 00:16:04.739 --rc geninfo_all_blocks=1 00:16:04.739 --rc geninfo_unexecuted_blocks=1 00:16:04.739 00:16:04.739 ' 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.739 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:04.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:16:04.740 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:16:12.886 06:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:12.886 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:12.886 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:12.886 Found net devices under 0000:31:00.0: cvl_0_0 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:12.886 Found net devices under 0000:31:00.1: cvl_0_1 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:12.886 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:12.887 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.887 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:12.887 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:12.887 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:12.887 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:12.887 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:12.887 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:12.887 06:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:12.887 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:12.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:16:12.887 00:16:12.887 --- 10.0.0.2 ping statistics --- 00:16:12.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.887 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:12.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:12.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:16:12.887 00:16:12.887 --- 10.0.0.1 ping statistics --- 00:16:12.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.887 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2605500 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2605500 00:16:12.887 06:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 2605500 ']' 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:12.887 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.887 [2024-11-20 06:27:32.156142] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:16:12.887 [2024-11-20 06:27:32.156212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.887 [2024-11-20 06:27:32.256967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:12.887 [2024-11-20 06:27:32.310057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.887 [2024-11-20 06:27:32.310112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.887 [2024-11-20 06:27:32.310121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.887 [2024-11-20 06:27:32.310128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.887 [2024-11-20 06:27:32.310135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
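A minimal sketch of the launch-and-wait step traced above, for anyone replaying it by hand. The namespace name, flags, and socket path are copied from the trace (binary path abbreviated); the polling loop is an assumption standing in for waitforlisten, whose body is not shown in this log:

    # Launch the target inside the test namespace with the flags from the trace,
    # then poll until its RPC socket appears (or bail out if the process died).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.1
    done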
00:16:12.887 [2024-11-20 06:27:32.312268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.887 [2024-11-20 06:27:32.312425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.887 [2024-11-20 06:27:32.312585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.887 [2024-11-20 06:27:32.312585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.149 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:13.149 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:16:13.149 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:13.149 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:13.149 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.149 [2024-11-20 06:27:33.034669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.149 Null1 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.149 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.411 06:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.411 [2024-11-20 06:27:33.101958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.411 Null2 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:16:13.411 Null3 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.411 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.412 Null4 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.412 06:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.412 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:16:13.674 00:16:13.674 Discovery Log Number of Records 6, Generation counter 6 00:16:13.674 =====Discovery Log Entry 0====== 00:16:13.674 trtype: tcp 00:16:13.674 adrfam: ipv4 00:16:13.674 subtype: current discovery subsystem 00:16:13.674 treq: not required 00:16:13.674 portid: 0 00:16:13.674 trsvcid: 4420 00:16:13.674 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:13.674 traddr: 10.0.0.2 00:16:13.674 eflags: explicit discovery connections, duplicate discovery information 00:16:13.674 sectype: none 00:16:13.674 =====Discovery Log Entry 1====== 00:16:13.674 trtype: tcp 00:16:13.674 adrfam: ipv4 00:16:13.674 subtype: nvme subsystem 00:16:13.674 treq: not required 00:16:13.674 portid: 0 00:16:13.674 trsvcid: 4420 00:16:13.674 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:13.674 traddr: 10.0.0.2 00:16:13.674 eflags: none 00:16:13.674 sectype: none 00:16:13.674 =====Discovery Log Entry 2====== 00:16:13.674 trtype: tcp 00:16:13.674 adrfam: ipv4 00:16:13.674 subtype: nvme subsystem 00:16:13.674 treq: not required 00:16:13.674 portid: 0 00:16:13.674 trsvcid: 4420 00:16:13.674 subnqn: nqn.2016-06.io.spdk:cnode2 00:16:13.674 traddr: 10.0.0.2 00:16:13.674 eflags: none 00:16:13.674 sectype: none 00:16:13.674 =====Discovery Log Entry 3====== 00:16:13.674 trtype: tcp 00:16:13.674 adrfam: ipv4 00:16:13.674 subtype: nvme subsystem 00:16:13.674 treq: not required 00:16:13.674 portid: 0 00:16:13.674 trsvcid: 4420 00:16:13.674 subnqn: nqn.2016-06.io.spdk:cnode3 00:16:13.674 traddr: 10.0.0.2 00:16:13.674 eflags: none 00:16:13.674 sectype: none 00:16:13.674 =====Discovery Log Entry 4====== 00:16:13.674 trtype: tcp 00:16:13.674 adrfam: ipv4 00:16:13.674 subtype: nvme subsystem 
00:16:13.674 treq: not required 00:16:13.674 portid: 0 00:16:13.674 trsvcid: 4420 00:16:13.674 subnqn: nqn.2016-06.io.spdk:cnode4 00:16:13.674 traddr: 10.0.0.2 00:16:13.674 eflags: none 00:16:13.674 sectype: none 00:16:13.674 =====Discovery Log Entry 5====== 00:16:13.674 trtype: tcp 00:16:13.674 adrfam: ipv4 00:16:13.674 subtype: discovery subsystem referral 00:16:13.674 treq: not required 00:16:13.674 portid: 0 00:16:13.674 trsvcid: 4430 00:16:13.674 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:13.674 traddr: 10.0.0.2 00:16:13.674 eflags: none 00:16:13.674 sectype: none 00:16:13.674 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:16:13.674 Perform nvmf subsystem discovery via RPC 00:16:13.674 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:16:13.674 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.674 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.674 [ 00:16:13.674 { 00:16:13.674 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:13.674 "subtype": "Discovery", 00:16:13.674 "listen_addresses": [ 00:16:13.674 { 00:16:13.674 "trtype": "TCP", 00:16:13.674 "adrfam": "IPv4", 00:16:13.674 "traddr": "10.0.0.2", 00:16:13.674 "trsvcid": "4420" 00:16:13.674 } 00:16:13.674 ], 00:16:13.674 "allow_any_host": true, 00:16:13.674 "hosts": [] 00:16:13.674 }, 00:16:13.674 { 00:16:13.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.674 "subtype": "NVMe", 00:16:13.674 "listen_addresses": [ 00:16:13.674 { 00:16:13.674 "trtype": "TCP", 00:16:13.674 "adrfam": "IPv4", 00:16:13.674 "traddr": "10.0.0.2", 00:16:13.674 "trsvcid": "4420" 00:16:13.674 } 00:16:13.674 ], 00:16:13.674 "allow_any_host": true, 00:16:13.674 "hosts": [], 00:16:13.674 "serial_number": "SPDK00000000000001", 00:16:13.674 "model_number": "SPDK bdev Controller", 00:16:13.674 "max_namespaces": 32, 00:16:13.674 "min_cntlid": 1, 00:16:13.674 "max_cntlid": 65519, 00:16:13.674 "namespaces": [ 00:16:13.674 { 00:16:13.674 "nsid": 1, 00:16:13.674 "bdev_name": "Null1", 00:16:13.674 "name": "Null1", 00:16:13.674 "nguid": "4FCD075DBAE84D8ABCA04F956C970AE8", 00:16:13.674 "uuid": "4fcd075d-bae8-4d8a-bca0-4f956c970ae8" 00:16:13.674 } 00:16:13.674 ] 00:16:13.674 }, 00:16:13.674 { 00:16:13.674 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:13.674 "subtype": "NVMe", 00:16:13.674 "listen_addresses": [ 00:16:13.674 { 00:16:13.674 "trtype": "TCP", 00:16:13.674 "adrfam": "IPv4", 00:16:13.674 "traddr": "10.0.0.2", 00:16:13.674 "trsvcid": "4420" 00:16:13.674 } 00:16:13.674 ], 00:16:13.674 "allow_any_host": true, 00:16:13.674 "hosts": [], 00:16:13.674 "serial_number": "SPDK00000000000002", 00:16:13.674 "model_number": "SPDK bdev Controller", 00:16:13.674 "max_namespaces": 32, 00:16:13.674 "min_cntlid": 1, 00:16:13.674 "max_cntlid": 65519, 00:16:13.674 "namespaces": [ 00:16:13.674 { 00:16:13.674 "nsid": 1, 00:16:13.674 "bdev_name": "Null2", 00:16:13.674 "name": "Null2", 00:16:13.674 "nguid": "23C9606DD0D14B53B5AB269DF98047DB", 00:16:13.674 "uuid": "23c9606d-d0d1-4b53-b5ab-269df98047db" 00:16:13.674 } 00:16:13.674 ] 00:16:13.674 }, 00:16:13.674 { 00:16:13.674 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:16:13.674 "subtype": "NVMe", 00:16:13.674 "listen_addresses": [ 00:16:13.674 { 00:16:13.674 "trtype": "TCP", 00:16:13.674 "adrfam": "IPv4", 00:16:13.674 "traddr": "10.0.0.2", 
00:16:13.674 "trsvcid": "4420" 00:16:13.674 } 00:16:13.674 ], 00:16:13.674 "allow_any_host": true, 00:16:13.674 "hosts": [], 00:16:13.674 "serial_number": "SPDK00000000000003", 00:16:13.674 "model_number": "SPDK bdev Controller", 00:16:13.674 "max_namespaces": 32, 00:16:13.674 "min_cntlid": 1, 00:16:13.674 "max_cntlid": 65519, 00:16:13.674 "namespaces": [ 00:16:13.674 { 00:16:13.674 "nsid": 1, 00:16:13.674 "bdev_name": "Null3", 00:16:13.674 "name": "Null3", 00:16:13.674 "nguid": "DF0C9862F44B4F53B6803E53BC4BD186", 00:16:13.674 "uuid": "df0c9862-f44b-4f53-b680-3e53bc4bd186" 00:16:13.674 } 00:16:13.674 ] 00:16:13.674 }, 00:16:13.674 { 00:16:13.674 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:16:13.674 "subtype": "NVMe", 00:16:13.674 "listen_addresses": [ 00:16:13.674 { 00:16:13.674 "trtype": "TCP", 00:16:13.674 "adrfam": "IPv4", 00:16:13.674 "traddr": "10.0.0.2", 00:16:13.674 "trsvcid": "4420" 00:16:13.674 } 00:16:13.674 ], 00:16:13.674 "allow_any_host": true, 00:16:13.674 "hosts": [], 00:16:13.674 "serial_number": "SPDK00000000000004", 00:16:13.674 "model_number": "SPDK bdev Controller", 00:16:13.674 "max_namespaces": 32, 00:16:13.674 "min_cntlid": 1, 00:16:13.674 "max_cntlid": 65519, 00:16:13.674 "namespaces": [ 00:16:13.674 { 00:16:13.674 "nsid": 1, 00:16:13.674 "bdev_name": "Null4", 00:16:13.674 "name": "Null4", 00:16:13.674 "nguid": "AF416C6B3C2146749E64473CCCAE5C58", 00:16:13.674 "uuid": "af416c6b-3c21-4674-9e64-473cccae5c58" 00:16:13.674 } 00:16:13.674 ] 00:16:13.674 } 00:16:13.674 ] 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.675 06:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.675 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:13.937 06:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:13.937 rmmod nvme_tcp 00:16:13.937 rmmod nvme_fabrics 00:16:13.937 rmmod nvme_keyring 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2605500 ']' 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2605500 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 2605500 ']' 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 2605500 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2605500 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2605500' 00:16:13.937 killing process with pid 2605500 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 2605500 00:16:13.937 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 2605500 00:16:14.198 06:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:14.198 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:14.198 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:14.198 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:16:14.198 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:14.198 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:14.198 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:14.198 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:14.198 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:14.198 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.198 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.198 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:16.745 00:16:16.745 real 0m11.835s 00:16:16.745 user 0m8.885s 00:16:16.745 sys 0m6.292s 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.745 ************************************ 00:16:16.745 END TEST nvmf_target_discovery 00:16:16.745 ************************************ 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:16.745 ************************************ 00:16:16.745 START TEST nvmf_referrals 00:16:16.745 ************************************ 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:16.745 * Looking for test storage... 
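One detail of the teardown just above is worth isolating: each firewall rule the test installs is tagged with an SPDK_NVMF comment (see the ipts call during setup earlier), so cleanup can strip exactly those rules by filtering the saved ruleset rather than flushing the whole table. A condensed sketch, with the rule arguments copied from this log:

    # Setup: insert the ACCEPT rule and tag it so it can be found later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Teardown: rewrite the ruleset with every tagged rule removed.
    iptables-save | grep -v SPDK_NVMF | iptables-restore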
00:16:16.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:16.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.745 --rc genhtml_branch_coverage=1 00:16:16.745 --rc genhtml_function_coverage=1 00:16:16.745 --rc genhtml_legend=1 00:16:16.745 --rc geninfo_all_blocks=1 00:16:16.745 --rc geninfo_unexecuted_blocks=1 00:16:16.745 00:16:16.745 ' 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:16.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.745 --rc genhtml_branch_coverage=1 00:16:16.745 --rc genhtml_function_coverage=1 00:16:16.745 --rc genhtml_legend=1 00:16:16.745 --rc geninfo_all_blocks=1 00:16:16.745 --rc geninfo_unexecuted_blocks=1 00:16:16.745 00:16:16.745 ' 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:16.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.745 --rc genhtml_branch_coverage=1 00:16:16.745 --rc genhtml_function_coverage=1 00:16:16.745 --rc genhtml_legend=1 00:16:16.745 --rc geninfo_all_blocks=1 00:16:16.745 --rc geninfo_unexecuted_blocks=1 00:16:16.745 00:16:16.745 ' 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:16.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.745 --rc genhtml_branch_coverage=1 00:16:16.745 --rc genhtml_function_coverage=1 00:16:16.745 --rc genhtml_legend=1 00:16:16.745 --rc geninfo_all_blocks=1 00:16:16.745 --rc geninfo_unexecuted_blocks=1 00:16:16.745 00:16:16.745 ' 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.745 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:16.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
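The "[: : integer expression expected" complaint above is the same one hit during the discovery run: common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq operator needs integers on both sides, so a variable that expands to the empty string is an error rather than a false. A hedged sketch of the usual repair; SOME_FLAG is a stand-in name, since the trace does not show which variable is unset:

    # Broken form, as replayed in the trace: '' is not an integer.
    [ "$SOME_FLAG" -eq 1 ] && echo enabled    # errors when SOME_FLAG is unset
    # Defensive form: default to 0 so the numeric test always sees an integer.
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled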
00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:16:16.746 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:16:24.886 06:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:24.886 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:24.886 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:24.886 
06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:24.886 Found net devices under 0000:31:00.0: cvl_0_0 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:24.886 Found net devices under 0000:31:00.1: cvl_0_1 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:24.886 06:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.886 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:24.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:24.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms
00:16:24.887
00:16:24.887 --- 10.0.0.2 ping statistics ---
00:16:24.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:24.887 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms
00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:24.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:24.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms
00:16:24.887
00:16:24.887 --- 10.0.0.1 ping statistics ---
00:16:24.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:24.887 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:24.887 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2610186
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2610186
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 2610186 ']'
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
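Condensed from the nvmf_tcp_init trace above, this sketch shows the loopback topology the harness builds before starting the target: the first E810 port (cvl_0_0) is moved into a private network namespace as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator side, and reachability is verified in both directions. All interface names, addresses, and ports are taken from the log; the iptables comment argument the helper adds is omitted here:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator (root ns) and target (inside ns) ends of the link
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator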
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable
00:16:24.887 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:24.887 [2024-11-20 06:27:44.095079] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:16:24.887 [2024-11-20 06:27:44.095144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:24.887 [2024-11-20 06:27:44.196926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:24.887 [2024-11-20 06:27:44.249627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:24.887 [2024-11-20 06:27:44.249681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:24.887 [2024-11-20 06:27:44.249689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:24.887 [2024-11-20 06:27:44.249697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:24.887 [2024-11-20 06:27:44.249703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:24.887 [2024-11-20 06:27:44.251832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:24.887 [2024-11-20 06:27:44.252086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:24.887 [2024-11-20 06:27:44.251919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:24.887 [2024-11-20 06:27:44.252083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:25.149 [2024-11-20 06:27:44.969736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:25.149 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
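The rpc_cmd calls above go through SPDK's JSON-RPC socket; a standalone sketch of the same transport, discovery-listener, and referral setup using scripts/rpc.py against the default /var/tmp/spdk.sock (an assumption here; the loop condenses the three add_referral calls that follow in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq length    # the test expects 3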
00:16:25.149 [2024-11-20 06:27:44.997036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.149 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.410 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:25.671 06:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:25.671 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:25.932 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:26.194 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:26.194 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:26.194 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:26.194 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:26.194 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:26.194 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:26.194 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:26.194 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:26.194 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:26.194 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:26.194 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:26.194 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:26.194 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.455 06:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:26.455 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:26.715 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:26.715 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:26.716 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:26.716 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:26.716 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:26.716 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:26.716 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:26.977 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:27.238 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:27.238 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:27.238 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:27.238 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:27.238 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:27.238 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:27.238 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:27.238 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:27.238 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:27.238 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:16:27.238 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
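On the host side, each get_referral_ips/get_discovery_entries check above pairs nvme discover with a jq filter over the JSON log page; the verification step, condensed from the trace (the host NQN and ID are the ones generated for this run):

    nvme discover \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
      sort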
00:16:27.238 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:16:27.238 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:27.238 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:27.238 rmmod nvme_tcp 00:16:27.238 rmmod nvme_fabrics 00:16:27.498 rmmod nvme_keyring 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2610186 ']' 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2610186 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 2610186 ']' 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 2610186 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2610186 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2610186' 00:16:27.498 killing process with pid 2610186 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 2610186 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 2610186 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.498 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.498 06:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:30.042
00:16:30.042 real 0m13.331s
00:16:30.042 user 0m15.653s
00:16:30.042 sys 0m6.617s
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:30.042 ************************************
00:16:30.042 END TEST nvmf_referrals
00:16:30.042 ************************************
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:30.042 ************************************
00:16:30.042 START TEST nvmf_connect_disconnect
00:16:30.042 ************************************
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:16:30.042 * Looking for test storage...
00:16:30.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:30.042 06:27:49
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:30.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.042 --rc genhtml_branch_coverage=1 00:16:30.042 --rc genhtml_function_coverage=1 00:16:30.042 --rc genhtml_legend=1 00:16:30.042 --rc geninfo_all_blocks=1 00:16:30.042 --rc geninfo_unexecuted_blocks=1 00:16:30.042 00:16:30.042 ' 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:30.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.042 --rc genhtml_branch_coverage=1 00:16:30.042 --rc genhtml_function_coverage=1 00:16:30.042 --rc genhtml_legend=1 00:16:30.042 --rc geninfo_all_blocks=1 00:16:30.042 --rc geninfo_unexecuted_blocks=1 00:16:30.042 00:16:30.042 ' 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:30.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.042 --rc genhtml_branch_coverage=1 00:16:30.042 --rc genhtml_function_coverage=1 00:16:30.042 --rc genhtml_legend=1 00:16:30.042 --rc geninfo_all_blocks=1 00:16:30.042 --rc geninfo_unexecuted_blocks=1 00:16:30.042 00:16:30.042 ' 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:30.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.042 --rc genhtml_branch_coverage=1 00:16:30.042 --rc genhtml_function_coverage=1 00:16:30.042 --rc genhtml_legend=1 00:16:30.042 --rc geninfo_all_blocks=1 00:16:30.042 --rc geninfo_unexecuted_blocks=1 00:16:30.042 00:16:30.042 ' 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.042 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.043 06:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:16:30.043 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:38.180 
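The "integer expression expected" message above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the variable behind the test is unset, so test receives an empty string where -eq needs an integer. The run continues because the failed test merely skips that branch, but the diagnostic repeats for every test that sources common.sh. A hedged sketch of the usual guard; SOME_FLAG is an illustrative stand-in, since the trace does not show which variable expanded empty:

# Guard an integer comparison against an unset/empty variable.
# SOME_FLAG is a hypothetical stand-in for whichever variable is
# empty at nvmf/common.sh line 33.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag is set"
fi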
06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:38.180 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:38.180 
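The e810/x722/mlx buckets above are filled from a pci_bus_cache map keyed by vendor:device ID (0x8086 Intel, 0x15b3 Mellanox), which is why 0000:31:00.0 with device 0x159b lands in e810. A rough equivalent of that classification done directly with lspci, with the ID table abbreviated to the entries visible in the trace (pci_bus_cache itself is built elsewhere in common.sh and is not reproduced here):

# Bucket NICs by vendor:device ID; lspci -Dn prints
# "<domain:bus:dev.fn> <class>: <vendor>:<device>" per line.
e810=(); x722=(); mlx=()
while read -r addr _ id _; do
    case "$id" in
        8086:1592|8086:159b) e810+=("$addr") ;;
        8086:37d2)           x722+=("$addr") ;;
        15b3:*)              mlx+=("$addr") ;;   # broader than the exact list above
    esac
done < <(lspci -Dn)
printf 'e810: %s\n' "${e810[@]}"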
06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:38.180 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:38.181 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:38.181 Found net devices under 0000:31:00.0: cvl_0_0 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:38.181 Found net devices under 0000:31:00.1: cvl_0_1 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:38.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:16:38.181 00:16:38.181 --- 10.0.0.2 ping statistics --- 00:16:38.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.181 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:38.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:38.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:16:38.181 00:16:38.181 --- 10.0.0.1 ping statistics --- 00:16:38.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.181 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2615165 00:16:38.181 06:27:57 
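The nvmf_tcp_init sequence above builds the two-port test topology: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1, an iptables rule tagged SPDK_NVMF opens TCP/4420, and both directions are verified with ping. Condensed from the trace, the same setup is roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator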
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2615165 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 2615165 ']' 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:38.181 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:38.181 [2024-11-20 06:27:57.520710] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:16:38.181 [2024-11-20 06:27:57.520786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.181 [2024-11-20 06:27:57.620763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.181 [2024-11-20 06:27:57.674498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.181 [2024-11-20 06:27:57.674549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.181 [2024-11-20 06:27:57.674558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.181 [2024-11-20 06:27:57.674565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.181 [2024-11-20 06:27:57.674573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
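nvmfappstart above launches nvmf_tgt inside the namespace and waitforlisten polls until the RPC socket answers (max_retries=100 per autotest_common.sh). A hedged sketch of the equivalent manual start; the rpc_get_methods polling loop is only an assumed readiness check, not a copy of what waitforlisten does:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the RPC socket until the app is up (assumed readiness check;
# the real waitforlisten also validates the pid and retries).
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done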
00:16:38.181 [2024-11-20 06:27:57.676718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.182 [2024-11-20 06:27:57.676879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.182 [2024-11-20 06:27:57.676931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.182 [2024-11-20 06:27:57.676931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:38.442 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:38.442 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:16:38.442 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:38.442 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:38.442 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:38.703 [2024-11-20 06:27:58.399110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:38.703 06:27:58 
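rpc_cmd in the trace above is a thin wrapper over scripts/rpc.py talking to the target's socket; the subsystem that the connect loop exercises is provisioned with five calls (the listener add appears in the trace just below). Expressed as direct rpc.py invocations, flags exactly as recorded:

RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"     # $SPDK as above
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
$RPC bdev_malloc_create 64 512                       # 64 MiB, 512 B blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420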
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:38.703 [2024-11-20 06:27:58.485137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:16:38.703 06:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:16:42.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.006 rmmod nvme_tcp 00:16:57.006 rmmod nvme_fabrics 00:16:57.006 rmmod nvme_keyring 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2615165 ']' 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2615165 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2615165 ']' 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 2615165 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
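The five "disconnected 1 controller(s)" lines above are the loop body of connect_disconnect.sh with num_iterations=5: each pass attaches the initiator to cnode1 over the namespaced link and tears it down again. Reassembled from the variables set earlier in the trace; the script's wait for the namespace block device between the two calls is omitted here:

NQN=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 5); do
    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
    # prints "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)"
    nvme disconnect -n "$NQN"
done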
00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:57.006 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2615165 00:16:57.267 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:57.267 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:57.267 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2615165' 00:16:57.267 killing process with pid 2615165 00:16:57.267 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 2615165 00:16:57.267 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 2615165 00:16:57.267 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:57.267 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:57.267 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:57.267 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:57.268 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:57.268 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:57.268 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:57.268 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:57.268 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:57.268 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.268 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.268 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:59.821 00:16:59.821 real 0m29.591s 00:16:59.821 user 1m19.198s 00:16:59.821 sys 0m7.392s 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:59.821 ************************************ 00:16:59.821 END TEST nvmf_connect_disconnect 00:16:59.821 ************************************ 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:59.821 06:28:19 
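nvmftestfini above unwinds everything nvmftestinit built: the nvme-tcp stack is unloaded (the cascading rmmod lines), the target process is killed, the iptr helper at nvmf/common.sh@297 filters the SPDK_NVMF-tagged rules back out of iptables, and the namespace is torn down. Condensed, with the namespace deletion an assumption about what _remove_spdk_ns does (the trace hides it behind xtrace_disable_per_cmd):

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's rules
kill -9 "$nvmfpid"             # killprocess, after kill -0 confirms it exists
modprobe -v -r nvme-tcp        # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics    # no-op if the cascade already removed it
ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1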
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.821 ************************************ 00:16:59.821 START TEST nvmf_multitarget 00:16:59.821 ************************************ 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:59.821 * Looking for test storage... 00:16:59.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:59.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.821 --rc genhtml_branch_coverage=1 00:16:59.821 --rc genhtml_function_coverage=1 00:16:59.821 --rc genhtml_legend=1 00:16:59.821 --rc geninfo_all_blocks=1 00:16:59.821 --rc geninfo_unexecuted_blocks=1 00:16:59.821 00:16:59.821 ' 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:59.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.821 --rc genhtml_branch_coverage=1 00:16:59.821 --rc genhtml_function_coverage=1 00:16:59.821 --rc genhtml_legend=1 00:16:59.821 --rc geninfo_all_blocks=1 00:16:59.821 --rc geninfo_unexecuted_blocks=1 00:16:59.821 00:16:59.821 ' 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:59.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.821 --rc genhtml_branch_coverage=1 00:16:59.821 --rc genhtml_function_coverage=1 00:16:59.821 --rc genhtml_legend=1 00:16:59.821 --rc geninfo_all_blocks=1 00:16:59.821 --rc geninfo_unexecuted_blocks=1 00:16:59.821 00:16:59.821 ' 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:59.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.821 --rc genhtml_branch_coverage=1 00:16:59.821 --rc genhtml_function_coverage=1 00:16:59.821 --rc genhtml_legend=1 00:16:59.821 --rc geninfo_all_blocks=1 00:16:59.821 --rc geninfo_unexecuted_blocks=1 00:16:59.821 00:16:59.821 ' 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.821 06:28:19 
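The lt 1.15 2 trace above is scripts/common.sh deciding whether the installed lcov (1.15, taken from lcov --version) predates 2.x, which selects the --rc lcov_branch_coverage=1 spelling of the coverage flags just exported. cmp_versions splits both strings on dots, dashes, and colons and walks them field by field; a condensed sketch that drops the digit validation the real helper performs:

# Field-wise version compare, as in scripts/common.sh cmp_versions:
# returns 0 (true) when $1 < $2.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal
}
lt 1.15 2 && echo "lcov is pre-2.x"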
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.821 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:59.822 06:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:59.822 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:08.095 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:08.095 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:08.095 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:08.096 Found net devices under 0000:31:00.0: cvl_0_0 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:08.096 Found net devices under 0000:31:00.1: cvl_0_1 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:08.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:17:08.096 00:17:08.096 --- 10.0.0.2 ping statistics --- 00:17:08.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.096 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:17:08.096 00:17:08.096 --- 10.0.0.1 ping statistics --- 00:17:08.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.096 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:08.096 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2623150 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2623150 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 2623150 ']' 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:08.096 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:08.096 [2024-11-20 06:28:27.102343] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
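For orientation, the nvmf_tcp_init block traced above amounts to splitting the two e810 ports across network namespaces: the target NIC moves into cvl_0_0_ns_spdk, the initiator NIC stays in the default namespace, and a tagged firewall rule opens the NVMe/TCP port. A minimal sketch of the traced commands, assuming the interfaces are already named cvl_0_0 and cvl_0_1 as in this run (the real helper is nvmf_tcp_init in test/nvmf/common.sh):

    # target side lives in its own netns; initiator side stays in the host namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface (rule is tagged, see below)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # verify reachability in both directions before starting nvmf_tgt in the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1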
00:17:08.096 [2024-11-20 06:28:27.102407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.096 [2024-11-20 06:28:27.202368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:08.096 [2024-11-20 06:28:27.255312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.096 [2024-11-20 06:28:27.255363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.096 [2024-11-20 06:28:27.255376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.096 [2024-11-20 06:28:27.255383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.096 [2024-11-20 06:28:27.255390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.096 [2024-11-20 06:28:27.257800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.097 [2024-11-20 06:28:27.257960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.097 [2024-11-20 06:28:27.258116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.097 [2024-11-20 06:28:27.258117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.097 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:08.097 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:17:08.097 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.097 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:08.097 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:08.097 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.097 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:08.097 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:08.097 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:08.358 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:08.358 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:08.358 "nvmf_tgt_1" 00:17:08.358 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:08.629 "nvmf_tgt_2" 00:17:08.629 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
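The multitarget checks driven through test/nvmf/target/multitarget_rpc.py from here on condense to a create/count/delete round trip. A sketch of the traced call sequence, with MT standing in for the full script path used in this run:

    MT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$("$MT" nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    "$MT" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$MT" nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$("$MT" nvmf_get_targets | jq length)" -eq 3 ]   # default plus the two new targets
    "$MT" nvmf_delete_target -n nvmf_tgt_1
    "$MT" nvmf_delete_target -n nvmf_tgt_2
    [ "$("$MT" nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default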
00:17:08.629 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:08.629 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:08.629 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:08.629 true 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:08.898 true 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:08.898 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:08.898 rmmod nvme_tcp 00:17:08.898 rmmod nvme_fabrics 00:17:09.160 rmmod nvme_keyring 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2623150 ']' 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2623150 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 2623150 ']' 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 2623150 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2623150 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:09.160 06:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2623150' 00:17:09.160 killing process with pid 2623150 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 2623150 00:17:09.160 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 2623150 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.423 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.341 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:11.341 00:17:11.341 real 0m11.968s 00:17:11.341 user 0m10.418s 00:17:11.341 sys 0m6.175s 00:17:11.341 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:11.341 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:11.341 ************************************ 00:17:11.341 END TEST nvmf_multitarget 00:17:11.341 ************************************ 00:17:11.341 06:28:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:11.341 06:28:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:11.341 06:28:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:11.341 06:28:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:11.602 ************************************ 00:17:11.602 START TEST nvmf_rpc 00:17:11.602 ************************************ 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:11.602 * Looking for test storage... 
00:17:11.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:11.602 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.603 --rc genhtml_branch_coverage=1 00:17:11.603 --rc genhtml_function_coverage=1 00:17:11.603 --rc genhtml_legend=1 00:17:11.603 --rc geninfo_all_blocks=1 00:17:11.603 --rc geninfo_unexecuted_blocks=1 00:17:11.603 00:17:11.603 ' 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.603 --rc genhtml_branch_coverage=1 00:17:11.603 --rc genhtml_function_coverage=1 00:17:11.603 --rc genhtml_legend=1 00:17:11.603 --rc geninfo_all_blocks=1 00:17:11.603 --rc geninfo_unexecuted_blocks=1 00:17:11.603 00:17:11.603 ' 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.603 --rc genhtml_branch_coverage=1 00:17:11.603 --rc genhtml_function_coverage=1 00:17:11.603 --rc genhtml_legend=1 00:17:11.603 --rc geninfo_all_blocks=1 00:17:11.603 --rc geninfo_unexecuted_blocks=1 00:17:11.603 00:17:11.603 ' 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:11.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:11.603 --rc genhtml_branch_coverage=1 00:17:11.603 --rc genhtml_function_coverage=1 00:17:11.603 --rc genhtml_legend=1 00:17:11.603 --rc geninfo_all_blocks=1 00:17:11.603 --rc geninfo_unexecuted_blocks=1 00:17:11.603 00:17:11.603 ' 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
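The scripts/common.sh trace above is the coverage-tool version gate: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them component by component. A condensed sketch of that comparison, simplified from the traced helper (numeric components only; the real script validates each field through decimal()):

    # succeeds when $1 sorts strictly before $2, comparing dotted fields numerically
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov predates 2.x"   # true for the lcov seen in this run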
00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:11.603 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:11.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:11.604 06:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:11.604 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.741 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.741 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:19.741 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:19.742 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:19.742 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:19.742 Found net devices under 0000:31:00.0: cvl_0_0 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:19.742 Found net devices under 0000:31:00.1: cvl_0_1 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:19.742 06:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:19.742 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.742 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.742 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.742 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:19.742 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:19.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:17:19.742 00:17:19.742 --- 10.0.0.2 ping statistics --- 00:17:19.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.742 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:17:19.742 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:19.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:17:19.742 00:17:19.742 --- 10.0.0.1 ping statistics --- 00:17:19.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.742 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:17:19.742 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.742 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:19.742 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:19.742 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.742 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2627882 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2627882 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 2627882 ']' 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:19.743 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.743 [2024-11-20 06:28:39.212512] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
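Worth noting in the setup trace above: every firewall rule the harness inserts (the ipts wrapper) carries an SPDK_NVMF comment, which is what lets the iptr cleanup seen at the end of the previous test strip exactly those rules and nothing else. The pattern, as traced (the real wrappers live in test/nvmf/common.sh):

    # insert: tag the rule so teardown can find it again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # cleanup: rewrite the ruleset minus everything tagged SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore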
00:17:19.743 [2024-11-20 06:28:39.212578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.743 [2024-11-20 06:28:39.313717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:19.743 [2024-11-20 06:28:39.366282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.743 [2024-11-20 06:28:39.366333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.743 [2024-11-20 06:28:39.366342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.743 [2024-11-20 06:28:39.366350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.743 [2024-11-20 06:28:39.366356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.743 [2024-11-20 06:28:39.368447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.743 [2024-11-20 06:28:39.368610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.743 [2024-11-20 06:28:39.368786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:19.743 [2024-11-20 06:28:39.368848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:20.316 "tick_rate": 2400000000, 00:17:20.316 "poll_groups": [ 00:17:20.316 { 00:17:20.316 "name": "nvmf_tgt_poll_group_000", 00:17:20.316 "admin_qpairs": 0, 00:17:20.316 "io_qpairs": 0, 00:17:20.316 "current_admin_qpairs": 0, 00:17:20.316 "current_io_qpairs": 0, 00:17:20.316 "pending_bdev_io": 0, 00:17:20.316 "completed_nvme_io": 0, 00:17:20.316 "transports": [] 00:17:20.316 }, 00:17:20.316 { 00:17:20.316 "name": "nvmf_tgt_poll_group_001", 00:17:20.316 "admin_qpairs": 0, 00:17:20.316 "io_qpairs": 0, 00:17:20.316 "current_admin_qpairs": 0, 00:17:20.316 "current_io_qpairs": 0, 00:17:20.316 "pending_bdev_io": 0, 00:17:20.316 "completed_nvme_io": 0, 00:17:20.316 "transports": [] 00:17:20.316 }, 00:17:20.316 { 00:17:20.316 "name": "nvmf_tgt_poll_group_002", 00:17:20.316 "admin_qpairs": 0, 00:17:20.316 "io_qpairs": 0, 00:17:20.316 
"current_admin_qpairs": 0, 00:17:20.316 "current_io_qpairs": 0, 00:17:20.316 "pending_bdev_io": 0, 00:17:20.316 "completed_nvme_io": 0, 00:17:20.316 "transports": [] 00:17:20.316 }, 00:17:20.316 { 00:17:20.316 "name": "nvmf_tgt_poll_group_003", 00:17:20.316 "admin_qpairs": 0, 00:17:20.316 "io_qpairs": 0, 00:17:20.316 "current_admin_qpairs": 0, 00:17:20.316 "current_io_qpairs": 0, 00:17:20.316 "pending_bdev_io": 0, 00:17:20.316 "completed_nvme_io": 0, 00:17:20.316 "transports": [] 00:17:20.316 } 00:17:20.316 ] 00:17:20.316 }' 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.316 [2024-11-20 06:28:40.222491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.316 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:20.578 "tick_rate": 2400000000, 00:17:20.578 "poll_groups": [ 00:17:20.578 { 00:17:20.578 "name": "nvmf_tgt_poll_group_000", 00:17:20.578 "admin_qpairs": 0, 00:17:20.578 "io_qpairs": 0, 00:17:20.578 "current_admin_qpairs": 0, 00:17:20.578 "current_io_qpairs": 0, 00:17:20.578 "pending_bdev_io": 0, 00:17:20.578 "completed_nvme_io": 0, 00:17:20.578 "transports": [ 00:17:20.578 { 00:17:20.578 "trtype": "TCP" 00:17:20.578 } 00:17:20.578 ] 00:17:20.578 }, 00:17:20.578 { 00:17:20.578 "name": "nvmf_tgt_poll_group_001", 00:17:20.578 "admin_qpairs": 0, 00:17:20.578 "io_qpairs": 0, 00:17:20.578 "current_admin_qpairs": 0, 00:17:20.578 "current_io_qpairs": 0, 00:17:20.578 "pending_bdev_io": 0, 00:17:20.578 "completed_nvme_io": 0, 00:17:20.578 "transports": [ 00:17:20.578 { 00:17:20.578 "trtype": "TCP" 00:17:20.578 } 00:17:20.578 ] 00:17:20.578 }, 00:17:20.578 { 00:17:20.578 "name": "nvmf_tgt_poll_group_002", 00:17:20.578 "admin_qpairs": 0, 00:17:20.578 "io_qpairs": 0, 00:17:20.578 "current_admin_qpairs": 0, 00:17:20.578 "current_io_qpairs": 0, 00:17:20.578 "pending_bdev_io": 0, 00:17:20.578 "completed_nvme_io": 0, 00:17:20.578 "transports": [ 00:17:20.578 { 00:17:20.578 "trtype": "TCP" 
00:17:20.578 } 00:17:20.578 ] 00:17:20.578 }, 00:17:20.578 { 00:17:20.578 "name": "nvmf_tgt_poll_group_003", 00:17:20.578 "admin_qpairs": 0, 00:17:20.578 "io_qpairs": 0, 00:17:20.578 "current_admin_qpairs": 0, 00:17:20.578 "current_io_qpairs": 0, 00:17:20.578 "pending_bdev_io": 0, 00:17:20.578 "completed_nvme_io": 0, 00:17:20.578 "transports": [ 00:17:20.578 { 00:17:20.578 "trtype": "TCP" 00:17:20.578 } 00:17:20.578 ] 00:17:20.578 } 00:17:20.578 ] 00:17:20.578 }' 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.578 Malloc1 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.578 [2024-11-20 06:28:40.434873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.578 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:20.579 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:17:20.579 [2024-11-20 06:28:40.471856] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:17:20.840 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:20.840 could not add new controller: failed to write to nvme-fabrics device 00:17:20.840 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:20.840 06:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.841 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.841 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.841 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:20.841 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.841 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.841 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.841 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:22.227 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:22.227 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:22.228 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.228 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:22.228 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:24.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:17:24.772 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.772 [2024-11-20 06:28:44.226021] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:17:24.773 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:24.773 could not add new controller: failed to write to nvme-fabrics device 00:17:24.773 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:24.773 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:24.773 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:24.773 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:24.773 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:24.773 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.773 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:24.773 
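What target/rpc.sh@55-72 has traced up to this point is SPDK's per-host access control: with allow_any_host disabled, a connect from an unlisted host NQN is rejected by nvmf_qpair_access_allowed ("does not allow host", surfacing as an I/O error on /dev/nvme-fabrics); adding the host with nvmf_subsystem_add_host lets the same connect succeed; removing it makes the next attempt fail again until the subsystem is opened with nvmf_subsystem_allow_any_host -e. A minimal sketch of that sequence, using the values from this run and assuming rpc.py is SPDK's scripts/rpc.py talking to the already-running target (the harness wraps it as rpc_cmd):

SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6

# Unlisted host: rejected at connect time while allow_any_host is off
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420   # fails: I/O error

rpc.py nvmf_subsystem_add_host $SUBNQN $HOSTNQN     # allow exactly this host
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420   # succeeds
nvme disconnect -n $SUBNQN

rpc.py nvmf_subsystem_remove_host $SUBNQN $HOSTNQN  # revoke: the next connect fails again
rpc.py nvmf_subsystem_allow_any_host -e $SUBNQN     # or open the subsystem to any host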
06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.773 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:26.158 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:26.158 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:26.158 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:26.158 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:26.158 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:28.073 
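The loop entered here (target/rpc.sh@81) then repeats the full subsystem life cycle five times. Condensed into a sketch under the same assumptions, with Malloc1 being the bdev created earlier in the test and the waitforserial helpers sketched further below:

for i in $(seq 1 5); do
    rpc.py nvmf_create_subsystem $SUBNQN -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener $SUBNQN -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns $SUBNQN Malloc1 -n 5        # namespace ID 5
    rpc.py nvmf_subsystem_allow_any_host $SUBNQN
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME                       # poll until the namespace appears
    nvme disconnect -n $SUBNQN
    waitforserial_disconnect SPDKISFASTANDAWESOME            # poll until it is gone
    rpc.py nvmf_subsystem_remove_ns $SUBNQN 5
    rpc.py nvmf_delete_subsystem $SUBNQN
done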
06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.073 [2024-11-20 06:28:47.982725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.073 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:28.333 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.333 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.333 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.333 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:28.333 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.333 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.333 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.333 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:29.717 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:29.717 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:29.717 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:29.717 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:29.717 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:31.630 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:31.630 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:31.630 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.630 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:31.630 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.630 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:31.630 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.891 [2024-11-20 06:28:51.694231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.891 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:33.807 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:33.807 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:33.807 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:33.807 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:33.807 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:35.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.721 [2024-11-20 06:28:55.415930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.721 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:37.105 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:37.105 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:37.105 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.105 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:37.105 06:28:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:39.674 
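The (( i++ <= 15 )) / lsblk / sleep 2 pattern that follows every connect is the harness's waitforserial: count block devices whose SERIAL column matches and retry until the expected number shows up. A reconstruction from the traced commands (the exact helper body in autotest_common.sh may differ; the retry limit and sleep interval are as traced):

waitforserial() {
    local serial=$1 expected=${2:-1} i=0 n=0
    sleep 2                                        # give the kernel time to create the node
    while (( i++ <= 15 )); do
        n=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( n == expected )) && return 0
        sleep 2
    done
    return 1                                       # device never appeared
}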
06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:39.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.674 [2024-11-20 06:28:59.168837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.674 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:41.061 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:41.061 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:41.061 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:41.061 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:41.061 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:42.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
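waitforserial_disconnect, traced around each nvme disconnect, is the mirror image: it loops while grep -q -w still finds the serial in the lsblk output. Again a sketch of what the trace shows rather than the verbatim helper:

waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1                 # still visible after ~15 tries: fail
        sleep 1
    done
    return 0
}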
00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.972 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.233 [2024-11-20 06:29:02.913440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.233 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:44.615 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:44.615 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:17:44.615 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:17:44.615 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:17:44.615 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:47.156 
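The second five-iteration loop (target/rpc.sh@99-107), starting here, never connects a host: it creates the subsystem, attaches the listener and a namespace, opens the subsystem, then tears everything straight back down, asserting only that each RPC returns success. Condensed under the same assumptions:

for i in $(seq 1 5); do
    rpc.py nvmf_create_subsystem $SUBNQN -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener $SUBNQN -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns $SUBNQN Malloc1    # NSID auto-assigned, 1 in this run
    rpc.py nvmf_subsystem_allow_any_host $SUBNQN
    rpc.py nvmf_subsystem_remove_ns $SUBNQN 1
    rpc.py nvmf_delete_subsystem $SUBNQN
done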
06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 [2024-11-20 06:29:06.660105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 [2024-11-20 06:29:06.724270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 
06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 [2024-11-20 06:29:06.792469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 [2024-11-20 06:29:06.864681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 [2024-11-20 06:29:06.928895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.157 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.157 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.157 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:47.157 "tick_rate": 2400000000, 00:17:47.157 "poll_groups": [ 00:17:47.157 { 00:17:47.157 "name": "nvmf_tgt_poll_group_000", 00:17:47.157 "admin_qpairs": 0, 00:17:47.157 "io_qpairs": 224, 00:17:47.157 "current_admin_qpairs": 0, 00:17:47.157 "current_io_qpairs": 0, 00:17:47.157 "pending_bdev_io": 0, 00:17:47.157 "completed_nvme_io": 325, 00:17:47.157 "transports": [ 00:17:47.157 { 00:17:47.157 "trtype": "TCP" 00:17:47.157 } 00:17:47.157 ] 00:17:47.157 }, 00:17:47.157 { 00:17:47.157 "name": "nvmf_tgt_poll_group_001", 00:17:47.157 "admin_qpairs": 1, 00:17:47.157 "io_qpairs": 223, 00:17:47.157 "current_admin_qpairs": 0, 00:17:47.157 "current_io_qpairs": 0, 00:17:47.157 "pending_bdev_io": 0, 00:17:47.157 "completed_nvme_io": 224, 00:17:47.157 "transports": [ 00:17:47.157 { 00:17:47.157 "trtype": "TCP" 00:17:47.157 } 00:17:47.157 ] 00:17:47.157 }, 00:17:47.157 { 00:17:47.157 "name": "nvmf_tgt_poll_group_002", 00:17:47.157 "admin_qpairs": 6, 00:17:47.157 "io_qpairs": 218, 00:17:47.157 "current_admin_qpairs": 0, 00:17:47.157 "current_io_qpairs": 0, 00:17:47.157 "pending_bdev_io": 0, 00:17:47.157 "completed_nvme_io": 463, 00:17:47.157 "transports": [ 00:17:47.157 { 00:17:47.157 "trtype": "TCP" 00:17:47.157 } 00:17:47.157 ] 00:17:47.157 }, 00:17:47.157 { 00:17:47.157 "name": "nvmf_tgt_poll_group_003", 00:17:47.157 "admin_qpairs": 0, 00:17:47.157 "io_qpairs": 224, 00:17:47.157 "current_admin_qpairs": 0, 00:17:47.157 "current_io_qpairs": 0, 00:17:47.157 "pending_bdev_io": 0, 00:17:47.157 "completed_nvme_io": 227, 00:17:47.157 "transports": [ 00:17:47.157 { 00:17:47.157 "trtype": "TCP" 00:17:47.157 } 00:17:47.157 ] 00:17:47.157 } 00:17:47.157 ] 00:17:47.157 }' 00:17:47.157 06:29:07 
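The jsum calls that follow validate the nvmf_get_stats output above: jq pulls one numeric field out of every poll group, awk sums the column, and the test merely asserts the totals are positive. A sketch of the helper, assuming the stats JSON is held in $stats as the trace suggests:

jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 0+1+6+0 = 7 in this run
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 224+223+218+224 = 889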
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:47.157 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:47.157 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:47.157 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:47.157 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:47.157 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:47.157 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:47.157 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:47.157 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:47.418 rmmod nvme_tcp 00:17:47.418 rmmod nvme_fabrics 00:17:47.418 rmmod nvme_keyring 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2627882 ']' 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2627882 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 2627882 ']' 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 2627882 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2627882 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
2627882' 00:17:47.418 killing process with pid 2627882 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 2627882 00:17:47.418 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 2627882 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.679 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.592 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:49.592 00:17:49.592 real 0m38.176s 00:17:49.592 user 1m53.912s 00:17:49.592 sys 0m8.013s 00:17:49.592 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:49.592 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.592 ************************************ 00:17:49.592 END TEST nvmf_rpc 00:17:49.592 ************************************ 00:17:49.592 06:29:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:49.592 06:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:49.592 06:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:49.592 06:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.853 ************************************ 00:17:49.853 START TEST nvmf_invalid 00:17:49.853 ************************************ 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:49.853 * Looking for test storage... 
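The nvmf_rpc teardown traced just above (nvmftestfini) unloads the kernel initiator modules, kills the target process, and strips only the SPDK-installed firewall rules before the next test starts. Roughly, per the traced commands (the harness retries the modprobe calls; $nvmfpid stands for the target PID, 2627882 in this run):

modprobe -v -r nvme-tcp        # also pulls nvme_fabrics/nvme_keyring out, per the rmmod lines
modprobe -v -r nvme-fabrics

kill "$nvmfpid" && wait "$nvmfpid"

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK rules, keep the rest
ip -4 addr flush cvl_0_1                               # clear the test interface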
00:17:49.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.853 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:49.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.854 --rc genhtml_branch_coverage=1 00:17:49.854 --rc genhtml_function_coverage=1 00:17:49.854 --rc genhtml_legend=1 00:17:49.854 --rc geninfo_all_blocks=1 00:17:49.854 --rc geninfo_unexecuted_blocks=1 00:17:49.854 00:17:49.854 ' 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:49.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.854 --rc genhtml_branch_coverage=1 00:17:49.854 --rc genhtml_function_coverage=1 00:17:49.854 --rc genhtml_legend=1 00:17:49.854 --rc geninfo_all_blocks=1 00:17:49.854 --rc geninfo_unexecuted_blocks=1 00:17:49.854 00:17:49.854 ' 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:49.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.854 --rc genhtml_branch_coverage=1 00:17:49.854 --rc genhtml_function_coverage=1 00:17:49.854 --rc genhtml_legend=1 00:17:49.854 --rc geninfo_all_blocks=1 00:17:49.854 --rc geninfo_unexecuted_blocks=1 00:17:49.854 00:17:49.854 ' 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:49.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.854 --rc genhtml_branch_coverage=1 00:17:49.854 --rc genhtml_function_coverage=1 00:17:49.854 --rc genhtml_legend=1 00:17:49.854 --rc geninfo_all_blocks=1 00:17:49.854 --rc geninfo_unexecuted_blocks=1 00:17:49.854 00:17:49.854 ' 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:49.854 06:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:49.854 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:57.998 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:57.998 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:57.998 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.999 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:57.999 Found net devices under 0000:31:00.0: cvl_0_0 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:57.999 Found net devices under 0000:31:00.1: cvl_0_1 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:57.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:17:57.999 00:17:57.999 --- 10.0.0.2 ping statistics --- 00:17:57.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.999 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
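The nvmf_tcp_init sequence traced above (common.sh@250-291) splits the two e810 ports into an initiator side in the root namespace and a target side in its own network namespace. Condensed into plain commands (a sketch; the cvl_0_0/cvl_0_1 names are specific to this rig's ice-driver ports):

    ip netns add cvl_0_0_ns_spdk                  # private netns for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port toward the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root ns -> target ns, as above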
00:17:57.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:17:57.999 00:17:57.999 --- 10.0.0.1 ping statistics --- 00:17:57.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.999 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2637770 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2637770 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 2637770 ']' 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:57.999 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:57.999 [2024-11-20 06:29:17.437520] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
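nvmfappstart (common.sh@507-510, traced above) launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app's JSON-RPC socket answers. A rough equivalent of that startup handshake (sketch only; the real waitforlisten polling loop lives in autotest_common.sh, and rpc_get_methods stands in here as a cheap readiness probe):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 0xF = reactors on cores 0-3
    nvmfpid=$!
    # poll the default RPC socket until the target is ready to serve RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done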
00:17:57.999 [2024-11-20 06:29:17.437586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.999 [2024-11-20 06:29:17.540698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.999 [2024-11-20 06:29:17.593326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.999 [2024-11-20 06:29:17.593381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.999 [2024-11-20 06:29:17.593390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.999 [2024-11-20 06:29:17.593397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.999 [2024-11-20 06:29:17.593403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.999 [2024-11-20 06:29:17.595521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.999 [2024-11-20 06:29:17.595678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.999 [2024-11-20 06:29:17.595837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.999 [2024-11-20 06:29:17.595837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.572 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:58.572 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:17:58.572 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:58.572 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:58.572 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:58.572 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.572 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:58.572 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13496 00:17:58.572 [2024-11-20 06:29:18.474567] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:58.834 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:58.834 { 00:17:58.834 "nqn": "nqn.2016-06.io.spdk:cnode13496", 00:17:58.834 "tgt_name": "foobar", 00:17:58.834 "method": "nvmf_create_subsystem", 00:17:58.834 "req_id": 1 00:17:58.834 } 00:17:58.834 Got JSON-RPC error response 00:17:58.834 response: 00:17:58.834 { 00:17:58.834 "code": -32603, 00:17:58.834 "message": "Unable to find target foobar" 00:17:58.834 }' 00:17:58.834 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:58.834 { 00:17:58.834 "nqn": "nqn.2016-06.io.spdk:cnode13496", 00:17:58.834 "tgt_name": "foobar", 00:17:58.834 "method": "nvmf_create_subsystem", 00:17:58.834 "req_id": 1 00:17:58.834 } 00:17:58.834 Got JSON-RPC error response 00:17:58.834 
response: 00:17:58.834 { 00:17:58.834 "code": -32603, 00:17:58.834 "message": "Unable to find target foobar" 00:17:58.834 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:58.834 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:58.834 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29072 00:17:58.834 [2024-11-20 06:29:18.683394] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29072: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:58.834 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:58.834 { 00:17:58.834 "nqn": "nqn.2016-06.io.spdk:cnode29072", 00:17:58.834 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:58.834 "method": "nvmf_create_subsystem", 00:17:58.834 "req_id": 1 00:17:58.834 } 00:17:58.834 Got JSON-RPC error response 00:17:58.834 response: 00:17:58.834 { 00:17:58.834 "code": -32602, 00:17:58.834 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:58.834 }' 00:17:58.834 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:58.834 { 00:17:58.834 "nqn": "nqn.2016-06.io.spdk:cnode29072", 00:17:58.834 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:58.834 "method": "nvmf_create_subsystem", 00:17:58.834 "req_id": 1 00:17:58.834 } 00:17:58.834 Got JSON-RPC error response 00:17:58.834 response: 00:17:58.834 { 00:17:58.834 "code": -32602, 00:17:58.834 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:58.834 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:58.834 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:58.834 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23415 00:17:59.097 [2024-11-20 06:29:18.892108] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23415: invalid model number 'SPDK_Controller' 00:17:59.097 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:59.097 { 00:17:59.097 "nqn": "nqn.2016-06.io.spdk:cnode23415", 00:17:59.097 "model_number": "SPDK_Controller\u001f", 00:17:59.097 "method": "nvmf_create_subsystem", 00:17:59.097 "req_id": 1 00:17:59.097 } 00:17:59.097 Got JSON-RPC error response 00:17:59.097 response: 00:17:59.097 { 00:17:59.097 "code": -32602, 00:17:59.097 "message": "Invalid MN SPDK_Controller\u001f" 00:17:59.097 }' 00:17:59.097 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:59.097 { 00:17:59.097 "nqn": "nqn.2016-06.io.spdk:cnode23415", 00:17:59.097 "model_number": "SPDK_Controller\u001f", 00:17:59.097 "method": "nvmf_create_subsystem", 00:17:59.097 "req_id": 1 00:17:59.097 } 00:17:59.097 Got JSON-RPC error response 00:17:59.097 response: 00:17:59.097 { 00:17:59.097 "code": -32602, 00:17:59.097 "message": "Invalid MN SPDK_Controller\u001f" 00:17:59.097 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:59.097 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:59.097 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:59.097 06:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:59.097 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:59.097 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:59.097 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:59.097 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.098 06:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.098 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:59.098 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:59.098 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:59.098 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.098 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.098 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:59.098 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:59.098 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:59.098 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.098 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:59.360 06:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:59.360 
06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ V == \- ]] 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Vj`Ld9O5al$5!G5kO9x+B' 00:17:59.360 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Vj`Ld9O5al$5!G5kO9x+B' nqn.2016-06.io.spdk:cnode31039 00:17:59.360 [2024-11-20 06:29:19.269564] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31039: invalid serial number 'Vj`Ld9O5al$5!G5kO9x+B' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:59.624 { 00:17:59.624 "nqn": "nqn.2016-06.io.spdk:cnode31039", 00:17:59.624 "serial_number": "Vj`Ld9O5al$5!G5kO9x+B", 00:17:59.624 "method": "nvmf_create_subsystem", 00:17:59.624 "req_id": 1 00:17:59.624 } 00:17:59.624 Got JSON-RPC error response 00:17:59.624 response: 00:17:59.624 { 00:17:59.624 "code": -32602, 00:17:59.624 "message": "Invalid SN Vj`Ld9O5al$5!G5kO9x+B" 00:17:59.624 }' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:59.624 { 00:17:59.624 "nqn": "nqn.2016-06.io.spdk:cnode31039", 00:17:59.624 "serial_number": "Vj`Ld9O5al$5!G5kO9x+B", 00:17:59.624 "method": "nvmf_create_subsystem", 00:17:59.624 "req_id": 1 00:17:59.624 } 00:17:59.624 Got JSON-RPC error response 00:17:59.624 response: 00:17:59.624 { 00:17:59.624 "code": -32602, 00:17:59.624 "message": "Invalid SN Vj`Ld9O5al$5!G5kO9x+B" 00:17:59.624 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' 
'77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:59.624 06:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:59.624 06:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:59.624 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x45' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.625 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 57 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Y == \- ]] 00:17:59.887 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Y|z-qzC#$3rbT@z;o?H8?'\''~3Ek]/BiH9j_<J0:\?m' 00:18:01.979 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.979 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.038 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:04.038 00:18:04.038 real 0m14.253s 00:18:04.038 user 0m21.149s 00:18:04.038 sys 0m6.710s 00:18:04.038 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:04.038 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:04.038 ************************************ 00:18:04.038 END TEST nvmf_invalid 00:18:04.038 ************************************ 00:18:04.038 06:29:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:04.038 06:29:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:04.038 06:29:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:04.038 06:29:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:04.038 ************************************ 00:18:04.038 START TEST nvmf_connect_stress 00:18:04.038 ************************************ 00:18:04.038 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:04.300 * Looking for test storage...
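
The wall of printf/echo/string+= lines above is nvmf_invalid's random-name generator building, one character at a time, the invalid subsystem name echoed at invalid.sh@31. Reconstructed from the traced commands (target/invalid.sh @21-@31), the helper is roughly the sketch below; the lower bound of the chars range and the exact @28 fix-up are assumptions, since the trace only shows codes '77'-'127' of the array and a single [[ Y == \- ]] probe:

    # gen_random_s: a sketch reconstructed from the xtrace above, not verbatim SPDK source
    gen_random_s() {
        local length=$1 ll
        local chars=({32..127})   # decimal ASCII codes; lower bound assumed
        local string
        for ((ll = 0; ll < length; ll++)); do
            # pick a random code, render it as hex, let echo -e expand \xHH into a character
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        # @28 checks that the name does not begin with '-'; the fix-up strategy is assumed
        [[ ${string::1} == "-" ]] && string="_${string:1}"
        echo "$string"
    }

Because xtrace prints each printf, echo -e and string+= step individually, a single generated name accounts for several hundred log lines here.
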
00:18:04.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:04.301 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:04.301 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:18:04.301 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:04.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.301 --rc genhtml_branch_coverage=1 00:18:04.301 --rc genhtml_function_coverage=1 00:18:04.301 --rc genhtml_legend=1 00:18:04.301 --rc geninfo_all_blocks=1 00:18:04.301 --rc geninfo_unexecuted_blocks=1 00:18:04.301 00:18:04.301 ' 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:04.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.301 --rc genhtml_branch_coverage=1 00:18:04.301 --rc genhtml_function_coverage=1 00:18:04.301 --rc genhtml_legend=1 00:18:04.301 --rc geninfo_all_blocks=1 00:18:04.301 --rc geninfo_unexecuted_blocks=1 00:18:04.301 00:18:04.301 ' 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:04.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.301 --rc genhtml_branch_coverage=1 00:18:04.301 --rc genhtml_function_coverage=1 00:18:04.301 --rc genhtml_legend=1 00:18:04.301 --rc geninfo_all_blocks=1 00:18:04.301 --rc geninfo_unexecuted_blocks=1 00:18:04.301 00:18:04.301 ' 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:04.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.301 --rc genhtml_branch_coverage=1 00:18:04.301 --rc genhtml_function_coverage=1 00:18:04.301 --rc genhtml_legend=1 00:18:04.301 --rc geninfo_all_blocks=1 00:18:04.301 --rc geninfo_unexecuted_blocks=1 00:18:04.301 00:18:04.301 ' 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.301 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:04.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:04.302 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:12.448 06:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:12.448 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:12.448 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:12.448 Found net devices under 0000:31:00.0: cvl_0_0 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:12.448 Found net devices under 0000:31:00.1: cvl_0_1 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.448 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:12.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:18:12.449 00:18:12.449 --- 10.0.0.2 ping statistics --- 00:18:12.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.449 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:12.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:18:12.449 00:18:12.449 --- 10.0.0.1 ping statistics --- 00:18:12.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.449 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2642992 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2642992 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 2642992 ']' 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:12.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:12.449 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.449 [2024-11-20 06:29:31.785803] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:18:12.449 [2024-11-20 06:29:31.785868] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.449 [2024-11-20 06:29:31.886732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:12.449 [2024-11-20 06:29:31.937939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.449 [2024-11-20 06:29:31.937991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.449 [2024-11-20 06:29:31.937999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.449 [2024-11-20 06:29:31.938006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.449 [2024-11-20 06:29:31.938013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.449 [2024-11-20 06:29:31.940139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.449 [2024-11-20 06:29:31.940299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.449 [2024-11-20 06:29:31.940298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:12.711 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:12.711 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:18:12.711 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:12.711 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:12.711 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.973 [2024-11-20 06:29:32.661094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
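
Condensed from the nvmftestinit/nvmfappstart trace above, the phy test-bed setup amounts to roughly the following (interfaces, addresses and flags exactly as logged; the nvmf_tgt path is shortened, and retries and error handling are elided):

    # The target-side e810 port moves into a private namespace; the initiator
    # port stays in the root namespace, so NVMe/TCP traffic crosses the wire.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator
    # then the target application is started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The -m 0xE core mask (binary 1110) matches the three reactors the startup log above reports on cores 1-3.
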
00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.973 [2024-11-20 06:29:32.686738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:12.973 NULL1 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2643113 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:12.973 06:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.973 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.234 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.234 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:13.234 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.234 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.234 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:13.807 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.807 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:13.807 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:13.807 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.807 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.068 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.068 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:14.068 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.068 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.068 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.328 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.328 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:14.328 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.328 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.328 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:14.589 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.589 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:14.589 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:14.589 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.589 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.161 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.161 06:29:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:15.161 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.161 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.161 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.423 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.423 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:15.423 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.423 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.423 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.683 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.683 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:15.683 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.683 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.683 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:15.944 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.944 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:15.944 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:15.944 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.944 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.205 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.205 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:16.205 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.205 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.205 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:16.778 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.778 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:16.778 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:16.778 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.778 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.038 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.038 06:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:17.039 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.039 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.039 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.299 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.299 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:17.299 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.299 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.299 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.561 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.561 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:17.561 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.561 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.561 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:17.821 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.821 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:17.821 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:17.821 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.821 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.393 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.393 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:18.393 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.393 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.393 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.653 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.653 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:18.653 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.653 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.653 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.914 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.914 06:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:18.914 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.914 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.914 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.174 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.174 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:19.174 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:19.174 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.174 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:19.435 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.435 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:19.435 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:19.435 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.435 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.007 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.007 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:20.007 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:20.007 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.007 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.267 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.267 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:20.267 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:20.267 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.267 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.528 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.528 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:20.528 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:20.528 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.528 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:20.789 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.789 06:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:20.789 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:20.789 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.789 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.360 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.360 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:21.360 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.360 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.360 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.622 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.622 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:21.622 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.622 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.622 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.882 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.883 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:21.883 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:21.883 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.883 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.145 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.145 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:22.145 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.145 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.145 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.406 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.406 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:22.406 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.406 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.406 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.978 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.978 06:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:22.978 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.978 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.978 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.978 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2643113 00:18:23.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2643113) - No such process 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2643113 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:23.239 rmmod nvme_tcp 00:18:23.239 rmmod nvme_fabrics 00:18:23.239 rmmod nvme_keyring 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2642992 ']' 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2642992 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 2642992 ']' 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 2642992 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:23.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2642992 00:18:23.239 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
00:18:23.239 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:23.239 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2642992' 00:18:23.239 killing process with pid 2642992 00:18:23.239 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 2642992 00:18:23.239 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 2642992 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:23.501 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.414 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:25.414 00:18:25.414 real 0m21.379s 00:18:25.414 user 0m42.258s 00:18:25.414 sys 0m9.354s 00:18:25.414 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:25.414 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.414 ************************************ 00:18:25.414 END TEST nvmf_connect_stress 00:18:25.414 ************************************ 00:18:25.414 06:29:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:25.414 06:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:25.414 06:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:25.414 06:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:25.414 ************************************ 00:18:25.414 START TEST nvmf_fused_ordering 00:18:25.414 ************************************ 00:18:25.414 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:25.676 * Looking for test storage... 
00:18:25.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:25.676 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:25.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.677 --rc genhtml_branch_coverage=1 00:18:25.677 --rc genhtml_function_coverage=1 00:18:25.677 --rc genhtml_legend=1 00:18:25.677 --rc geninfo_all_blocks=1 00:18:25.677 --rc geninfo_unexecuted_blocks=1 00:18:25.677 00:18:25.677 ' 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:25.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.677 --rc genhtml_branch_coverage=1 00:18:25.677 --rc genhtml_function_coverage=1 00:18:25.677 --rc genhtml_legend=1 00:18:25.677 --rc geninfo_all_blocks=1 00:18:25.677 --rc geninfo_unexecuted_blocks=1 00:18:25.677 00:18:25.677 ' 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:25.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.677 --rc genhtml_branch_coverage=1 00:18:25.677 --rc genhtml_function_coverage=1 00:18:25.677 --rc genhtml_legend=1 00:18:25.677 --rc geninfo_all_blocks=1 00:18:25.677 --rc geninfo_unexecuted_blocks=1 00:18:25.677 00:18:25.677 ' 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:25.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.677 --rc genhtml_branch_coverage=1 00:18:25.677 --rc genhtml_function_coverage=1 00:18:25.677 --rc genhtml_legend=1 00:18:25.677 --rc geninfo_all_blocks=1 00:18:25.677 --rc geninfo_unexecuted_blocks=1 00:18:25.677 00:18:25.677 ' 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:25.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:25.677 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:33.822 06:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:33.822 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.822 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:33.823 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:33.823 Found net devices under 0000:31:00.0: cvl_0_0 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:33.823 Found net devices under 0000:31:00.1: cvl_0_1 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:33.823 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:33.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:18:33.823 00:18:33.823 --- 10.0.0.2 ping statistics --- 00:18:33.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.823 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:33.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:18:33.823 00:18:33.823 --- 10.0.0.1 ping statistics --- 00:18:33.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.823 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2649415 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2649415 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 2649415 ']' 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:33.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:33.823 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:33.823 [2024-11-20 06:29:53.230301] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:18:33.824 [2024-11-20 06:29:53.230366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.824 [2024-11-20 06:29:53.331231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.824 [2024-11-20 06:29:53.381687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.824 [2024-11-20 06:29:53.381741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.824 [2024-11-20 06:29:53.381760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.824 [2024-11-20 06:29:53.381767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.824 [2024-11-20 06:29:53.381774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.824 [2024-11-20 06:29:53.382611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.396 [2024-11-20 06:29:54.110484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.396 [2024-11-20 06:29:54.134806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.396 NULL1 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.396 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:34.396 [2024-11-20 06:29:54.206789] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
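[Annotation] The rpc_cmd calls traced above perform the entire target-side bring-up for this test. Collapsed into a plain command sequence it looks like the sketch below; rpc_cmd is the test harness's thin wrapper around scripts/rpc.py (talking to /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace), and the sketch only restates the calls already logged above rather than documenting any additional options:

  # Sketch reconstructed from the trace above (same arguments, via scripts/rpc.py):
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512   # backs the namespace reported below as "size: 1GB"
  rpc.py bdev_wait_for_examine
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # then the fused-ordering exerciser is pointed at the new listener:
  test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Each fused_ordering(N) line in the output that follows marks one completed iteration of the exerciser. NVMe fused operations (for example the compare-and-write pair signalled by the FUSE field) must be submitted back-to-back and complete as a unit, and the in-order enumeration below is the trace the test emits while exercising that property over the TCP transport.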
00:18:34.396 [2024-11-20 06:29:54.206861] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2649709 ] 00:18:34.969 Attached to nqn.2016-06.io.spdk:cnode1 00:18:34.969 Namespace ID: 1 size: 1GB 00:18:34.969 fused_ordering(0) 00:18:34.969 fused_ordering(1) 00:18:34.969 fused_ordering(2) 00:18:34.969 fused_ordering(3) 00:18:34.969 fused_ordering(4) 00:18:34.969 fused_ordering(5) 00:18:34.969 fused_ordering(6) 00:18:34.969 fused_ordering(7) 00:18:34.969 fused_ordering(8) 00:18:34.969 fused_ordering(9) 00:18:34.969 fused_ordering(10) 00:18:34.969 fused_ordering(11) 00:18:34.969 fused_ordering(12) 00:18:34.969 fused_ordering(13) 00:18:34.969 fused_ordering(14) 00:18:34.969 fused_ordering(15) 00:18:34.969 fused_ordering(16) 00:18:34.969 fused_ordering(17) 00:18:34.969 fused_ordering(18) 00:18:34.969 fused_ordering(19) 00:18:34.969 fused_ordering(20) 00:18:34.969 fused_ordering(21) 00:18:34.969 fused_ordering(22) 00:18:34.969 fused_ordering(23) 00:18:34.969 fused_ordering(24) 00:18:34.969 fused_ordering(25) 00:18:34.969 fused_ordering(26) 00:18:34.969 fused_ordering(27) 00:18:34.969 fused_ordering(28) 00:18:34.969 fused_ordering(29) 00:18:34.969 fused_ordering(30) 00:18:34.969 fused_ordering(31) 00:18:34.969 fused_ordering(32) 00:18:34.969 fused_ordering(33) 00:18:34.969 fused_ordering(34) 00:18:34.969 fused_ordering(35) 00:18:34.969 fused_ordering(36) 00:18:34.969 fused_ordering(37) 00:18:34.969 fused_ordering(38) 00:18:34.969 fused_ordering(39) 00:18:34.969 fused_ordering(40) 00:18:34.969 fused_ordering(41) 00:18:34.969 fused_ordering(42) 00:18:34.969 fused_ordering(43) 00:18:34.969 fused_ordering(44) 00:18:34.969 fused_ordering(45) 00:18:34.969 fused_ordering(46) 00:18:34.969 fused_ordering(47) 00:18:34.969 fused_ordering(48) 00:18:34.969 fused_ordering(49) 00:18:34.969 fused_ordering(50) 00:18:34.969 fused_ordering(51) 00:18:34.969 fused_ordering(52) 00:18:34.969 fused_ordering(53) 00:18:34.969 fused_ordering(54) 00:18:34.969 fused_ordering(55) 00:18:34.969 fused_ordering(56) 00:18:34.969 fused_ordering(57) 00:18:34.969 fused_ordering(58) 00:18:34.969 fused_ordering(59) 00:18:34.969 fused_ordering(60) 00:18:34.969 fused_ordering(61) 00:18:34.969 fused_ordering(62) 00:18:34.969 fused_ordering(63) 00:18:34.969 fused_ordering(64) 00:18:34.969 fused_ordering(65) 00:18:34.969 fused_ordering(66) 00:18:34.969 fused_ordering(67) 00:18:34.969 fused_ordering(68) 00:18:34.969 fused_ordering(69) 00:18:34.969 fused_ordering(70) 00:18:34.969 fused_ordering(71) 00:18:34.969 fused_ordering(72) 00:18:34.969 fused_ordering(73) 00:18:34.969 fused_ordering(74) 00:18:34.969 fused_ordering(75) 00:18:34.969 fused_ordering(76) 00:18:34.969 fused_ordering(77) 00:18:34.969 fused_ordering(78) 00:18:34.969 fused_ordering(79) 00:18:34.969 fused_ordering(80) 00:18:34.969 fused_ordering(81) 00:18:34.969 fused_ordering(82) 00:18:34.969 fused_ordering(83) 00:18:34.969 fused_ordering(84) 00:18:34.969 fused_ordering(85) 00:18:34.969 fused_ordering(86) 00:18:34.969 fused_ordering(87) 00:18:34.969 fused_ordering(88) 00:18:34.969 fused_ordering(89) 00:18:34.969 fused_ordering(90) 00:18:34.969 fused_ordering(91) 00:18:34.969 fused_ordering(92) 00:18:34.969 fused_ordering(93) 00:18:34.969 fused_ordering(94) 00:18:34.969 fused_ordering(95) 00:18:34.969 fused_ordering(96) 00:18:34.969 fused_ordering(97) 00:18:34.969 fused_ordering(98) 
00:18:34.969 fused_ordering(99) [... fused_ordering(100) through fused_ordering(957) elided: 858 further entries, consecutive and in order, spanning 00:18:34.969-00:18:36.946; the retained head (0-99) and tail (958-1023) show the unabridged format ...] 00:18:36.946 fused_ordering(958)
00:18:36.946 fused_ordering(959) 00:18:36.946 fused_ordering(960) 00:18:36.946 fused_ordering(961) 00:18:36.946 fused_ordering(962) 00:18:36.946 fused_ordering(963) 00:18:36.946 fused_ordering(964) 00:18:36.946 fused_ordering(965) 00:18:36.946 fused_ordering(966) 00:18:36.946 fused_ordering(967) 00:18:36.946 fused_ordering(968) 00:18:36.946 fused_ordering(969) 00:18:36.946 fused_ordering(970) 00:18:36.946 fused_ordering(971) 00:18:36.946 fused_ordering(972) 00:18:36.946 fused_ordering(973) 00:18:36.946 fused_ordering(974) 00:18:36.946 fused_ordering(975) 00:18:36.946 fused_ordering(976) 00:18:36.946 fused_ordering(977) 00:18:36.946 fused_ordering(978) 00:18:36.946 fused_ordering(979) 00:18:36.946 fused_ordering(980) 00:18:36.946 fused_ordering(981) 00:18:36.946 fused_ordering(982) 00:18:36.946 fused_ordering(983) 00:18:36.946 fused_ordering(984) 00:18:36.946 fused_ordering(985) 00:18:36.946 fused_ordering(986) 00:18:36.946 fused_ordering(987) 00:18:36.946 fused_ordering(988) 00:18:36.946 fused_ordering(989) 00:18:36.946 fused_ordering(990) 00:18:36.946 fused_ordering(991) 00:18:36.946 fused_ordering(992) 00:18:36.946 fused_ordering(993) 00:18:36.946 fused_ordering(994) 00:18:36.946 fused_ordering(995) 00:18:36.946 fused_ordering(996) 00:18:36.946 fused_ordering(997) 00:18:36.946 fused_ordering(998) 00:18:36.946 fused_ordering(999) 00:18:36.946 fused_ordering(1000) 00:18:36.946 fused_ordering(1001) 00:18:36.946 fused_ordering(1002) 00:18:36.946 fused_ordering(1003) 00:18:36.946 fused_ordering(1004) 00:18:36.946 fused_ordering(1005) 00:18:36.946 fused_ordering(1006) 00:18:36.946 fused_ordering(1007) 00:18:36.946 fused_ordering(1008) 00:18:36.946 fused_ordering(1009) 00:18:36.946 fused_ordering(1010) 00:18:36.946 fused_ordering(1011) 00:18:36.946 fused_ordering(1012) 00:18:36.946 fused_ordering(1013) 00:18:36.946 fused_ordering(1014) 00:18:36.946 fused_ordering(1015) 00:18:36.946 fused_ordering(1016) 00:18:36.946 fused_ordering(1017) 00:18:36.946 fused_ordering(1018) 00:18:36.946 fused_ordering(1019) 00:18:36.946 fused_ordering(1020) 00:18:36.946 fused_ordering(1021) 00:18:36.946 fused_ordering(1022) 00:18:36.946 fused_ordering(1023) 00:18:36.946 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:36.946 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:36.946 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:36.946 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:36.946 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:36.947 rmmod nvme_tcp 00:18:36.947 rmmod nvme_fabrics 00:18:36.947 rmmod nvme_keyring 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:36.947 06:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2649415 ']' 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2649415 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 2649415 ']' 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 2649415 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2649415 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2649415' 00:18:36.947 killing process with pid 2649415 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 2649415 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 2649415 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.947 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.508 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:39.508 00:18:39.508 real 0m13.601s 00:18:39.508 user 0m7.171s 00:18:39.508 sys 0m7.299s 00:18:39.509 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:39.509 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:39.509 ************************************ 00:18:39.509 END TEST nvmf_fused_ordering 00:18:39.509 
************************************ 00:18:39.509 06:29:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:39.509 06:29:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:39.509 06:29:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:39.509 06:29:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:39.509 ************************************ 00:18:39.509 START TEST nvmf_ns_masking 00:18:39.509 ************************************ 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:39.509 * Looking for test storage... 00:18:39.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:39.509 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.510 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:39.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.511 --rc genhtml_branch_coverage=1 00:18:39.511 --rc genhtml_function_coverage=1 00:18:39.511 --rc genhtml_legend=1 00:18:39.511 --rc geninfo_all_blocks=1 00:18:39.511 --rc geninfo_unexecuted_blocks=1 00:18:39.511 00:18:39.511 ' 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:39.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.511 --rc genhtml_branch_coverage=1 00:18:39.511 --rc genhtml_function_coverage=1 00:18:39.511 --rc genhtml_legend=1 00:18:39.511 --rc geninfo_all_blocks=1 00:18:39.511 --rc geninfo_unexecuted_blocks=1 00:18:39.511 00:18:39.511 ' 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:39.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.511 --rc genhtml_branch_coverage=1 00:18:39.511 --rc genhtml_function_coverage=1 00:18:39.511 --rc genhtml_legend=1 00:18:39.511 --rc geninfo_all_blocks=1 00:18:39.511 --rc geninfo_unexecuted_blocks=1 00:18:39.511 00:18:39.511 ' 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:39.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.511 --rc genhtml_branch_coverage=1 00:18:39.511 --rc genhtml_function_coverage=1 00:18:39.511 --rc genhtml_legend=1 00:18:39.511 --rc geninfo_all_blocks=1 00:18:39.511 --rc geninfo_unexecuted_blocks=1 00:18:39.511 00:18:39.511 ' 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.511 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... near-duplicate of the toolchain PATH printed at paths/export.sh@2 above (repeated golangci/protoc/go prefixes), full expansion elided ...]:/var/lib/snapd/snap/bin 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... duplicate toolchain expansion elided ...]:/var/lib/snapd/snap/bin 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... duplicate toolchain expansion elided ...]:/var/lib/snapd/snap/bin 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e45b5169-1100-4da1-8edc-cdcb74372a78 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a145fe07-fbc5-4d2b-a04a-053e672f95c2 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2502b4f3-a9a3-4f0a-96d6-816b45e7d9c9 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:39.512 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:47.656 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.656 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:47.656 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:47.656 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:47.657 06:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:47.657 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:47.657 06:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:47.657 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:47.657 Found net devices under 0000:31:00.0: cvl_0_0 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
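A note for readers tracing nvmf/common.sh here: each matching E810 PCI function is resolved to its kernel interface by globbing the device's net/ directory in sysfs, which is what produces the "Found net devices under 0000:31:00.x" lines. A standalone sketch of that lookup, assuming only the standard sysfs layout and the vendor/device IDs printed above (this is illustrative, not the SPDK helper itself):

    #!/usr/bin/env bash
    # Sketch of the PCI-to-netdev resolution traced above; not the SPDK
    # helper itself. Vendor 0x8086 / device 0x159b are the E810 IDs from
    # the "Found 0000:31:00.x" lines; the sysfs paths are standard Linux.
    set -euo pipefail
    intel=0x8086
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$intel" ]] || continue
        [[ $(cat "$pci/device") == 0x159b ]] || continue
        # A bound port appears as a directory named after the interface.
        for net_dev in "$pci"/net/*; do
            [[ -d $net_dev ]] || continue
            echo "Found net devices under ${pci##*/}: ${net_dev##*/}"
            net_devs+=("${net_dev##*/}")
        done
    done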
00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:47.657 Found net devices under 0000:31:00.1: cvl_0_1 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.657 06:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:47.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:18:47.657 00:18:47.657 --- 10.0.0.2 ping statistics --- 00:18:47.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.657 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:47.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:18:47.657 00:18:47.657 --- 10.0.0.1 ping statistics --- 00:18:47.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.657 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.657 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2654587 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2654587 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2654587 ']' 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:47.658 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:47.658 [2024-11-20 06:30:06.968345] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:18:47.658 [2024-11-20 06:30:06.968412] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.658 [2024-11-20 06:30:07.068560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.658 [2024-11-20 06:30:07.119812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.658 [2024-11-20 06:30:07.119868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.658 [2024-11-20 06:30:07.119876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.658 [2024-11-20 06:30:07.119883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.658 [2024-11-20 06:30:07.119889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
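For orientation: the DPDK EAL and trace notices above come from nvmf_tgt being launched inside the cvl_0_0_ns_spdk network namespace, after which the harness blocks until the target answers on its RPC socket. A simplified stand-in for that start-and-wait step (paths and flags copied from the log; the polling loop is an assumption, not the exact waitforlisten implementation):

    #!/usr/bin/env bash
    # Simplified replay of the target startup traced above.
    set -euo pipefail
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NETNS=cvl_0_0_ns_spdk
    RPC_SOCK=/var/tmp/spdk.sock

    ip netns exec "$NETNS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket; give up if the target exits first.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid"
        sleep 0.1
    done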
00:18:47.658 [2024-11-20 06:30:07.120715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.919 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:47.919 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:18:47.919 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:47.919 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:47.919 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:48.180 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.180 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:48.180 [2024-11-20 06:30:08.004717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.180 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:48.180 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:48.180 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:48.440 Malloc1 00:18:48.440 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:48.702 Malloc2 00:18:48.702 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:48.962 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:48.962 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.223 [2024-11-20 06:30:09.028338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.223 06:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:49.223 06:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2502b4f3-a9a3-4f0a-96d6-816b45e7d9c9 -a 10.0.0.2 -s 4420 -i 4 00:18:49.485 06:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:49.485 06:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:49.485 06:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.485 06:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:49.485 
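The setup just traced reduces to a short RPC sequence; condensed here with the exact values from the log (TCP transport, two 64 MiB malloc bdevs, the cnode1 subsystem, namespace 1, a listener on 10.0.0.2:4420, then the initiator-side connect with an explicit host NQN and host identifier):

    #!/usr/bin/env bash
    # Condensed replay of the setup RPCs traced above (values from the log).
    set -euo pipefail
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1      # 64 MiB, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect with the host NQN and host ID used by the test.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 \
        -I 2502b4f3-a9a3-4f0a-96d6-816b45e7d9c9 \
        -a 10.0.0.2 -s 4420 -i 4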
06:30:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:51.401 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:51.401 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:51.401 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:51.401 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:51.401 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.401 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:18:51.401 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:51.401 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:51.662 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:51.662 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:51.662 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:51.662 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.662 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:51.662 [ 0]:0x1 00:18:51.662 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:51.662 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.662 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6886c0905a5e448e98c932c064c0aa3d 00:18:51.662 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6886c0905a5e448e98c932c064c0aa3d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.662 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:51.924 [ 0]:0x1 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6886c0905a5e448e98c932c064c0aa3d 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6886c0905a5e448e98c932c064c0aa3d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.924 06:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:51.924 [ 1]:0x2 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=906cac251b394636a95ccbaf0260e76a 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 906cac251b394636a95ccbaf0260e76a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:51.924 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:52.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.185 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:52.185 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:52.446 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:52.446 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2502b4f3-a9a3-4f0a-96d6-816b45e7d9c9 -a 10.0.0.2 -s 4420 -i 4 00:18:52.446 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:52.446 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:52.446 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:52.446 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:18:52.446 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:18:52.446 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:54.993 [ 0]:0x2 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=906cac251b394636a95ccbaf0260e76a 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 906cac251b394636a95ccbaf0260e76a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:54.993 [ 0]:0x1 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6886c0905a5e448e98c932c064c0aa3d 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6886c0905a5e448e98c932c064c0aa3d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:54.993 [ 1]:0x2 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=906cac251b394636a95ccbaf0260e76a 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 906cac251b394636a95ccbaf0260e76a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:54.993 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:55.255 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:55.256 06:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:55.256 [ 0]:0x2 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=906cac251b394636a95ccbaf0260e76a 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 906cac251b394636a95ccbaf0260e76a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:55.256 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:55.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:55.516 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:55.516 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:55.516 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2502b4f3-a9a3-4f0a-96d6-816b45e7d9c9 -a 10.0.0.2 -s 4420 -i 4 00:18:55.775 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:55.775 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:18:55.775 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.775 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:18:55.775 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:18:55.775 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:58.330 [ 0]:0x1 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6886c0905a5e448e98c932c064c0aa3d 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6886c0905a5e448e98c932c064c0aa3d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:58.330 [ 1]:0x2 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=906cac251b394636a95ccbaf0260e76a 00:18:58.330 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 906cac251b394636a95ccbaf0260e76a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.331 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:58.331 [ 0]:0x2 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:58.331 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=906cac251b394636a95ccbaf0260e76a 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 906cac251b394636a95ccbaf0260e76a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.592 06:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:58.592 [2024-11-20 06:30:18.445873] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:58.592 request: 00:18:58.592 { 00:18:58.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.592 "nsid": 2, 00:18:58.592 "host": "nqn.2016-06.io.spdk:host1", 00:18:58.592 "method": "nvmf_ns_remove_host", 00:18:58.592 "req_id": 1 00:18:58.592 } 00:18:58.592 Got JSON-RPC error response 00:18:58.592 response: 00:18:58.592 { 00:18:58.592 "code": -32602, 00:18:58.592 "message": "Invalid parameters" 00:18:58.592 } 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:58.592 06:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:58.592 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:58.854 [ 0]:0x2 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=906cac251b394636a95ccbaf0260e76a 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 906cac251b394636a95ccbaf0260e76a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:58.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2657260 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2657260 /var/tmp/host.sock 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2657260 ']' 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:58.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:58.854 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:58.854 [2024-11-20 06:30:18.689717] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:18:58.854 [2024-11-20 06:30:18.689772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657260 ] 00:18:59.116 [2024-11-20 06:30:18.777670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.116 [2024-11-20 06:30:18.813716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.689 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:59.689 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:18:59.689 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:59.949 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:59.949 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e45b5169-1100-4da1-8edc-cdcb74372a78 00:18:59.949 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:59.949 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E45B516911004DA18EDCCDCB74372A78 -i 00:19:00.228 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a145fe07-fbc5-4d2b-a04a-053e672f95c2 00:19:00.228 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:00.228 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A145FE07FBC54D2BA04A053E672F95C2 -i 00:19:00.537 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:00.537 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:00.823 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:00.823 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:01.116 nvme0n1 00:19:01.116 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:01.116 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:01.377 nvme1n2 00:19:01.377 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:01.377 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:01.377 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:01.377 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:01.377 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:01.377 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:01.377 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:01.377 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:01.377 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:01.639 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e45b5169-1100-4da1-8edc-cdcb74372a78 == \e\4\5\b\5\1\6\9\-\1\1\0\0\-\4\d\a\1\-\8\e\d\c\-\c\d\c\b\7\4\3\7\2\a\7\8 ]] 00:19:01.639 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:01.639 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:01.639 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:01.900 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
a145fe07-fbc5-4d2b-a04a-053e672f95c2 == \a\1\4\5\f\e\0\7\-\f\b\c\5\-\4\d\2\b\-\a\0\4\a\-\0\5\3\e\6\7\2\f\9\5\c\2 ]] 00:19:01.900 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:02.161 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid e45b5169-1100-4da1-8edc-cdcb74372a78 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E45B516911004DA18EDCCDCB74372A78 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E45B516911004DA18EDCCDCB74372A78 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:02.161 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E45B516911004DA18EDCCDCB74372A78 00:19:02.424 [2024-11-20 06:30:22.180086] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:02.424 [2024-11-20 06:30:22.180113] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:02.424 [2024-11-20 06:30:22.180120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.424 request: 00:19:02.424 { 00:19:02.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.424 "namespace": { 00:19:02.424 "bdev_name": 
"invalid", 00:19:02.424 "nsid": 1, 00:19:02.424 "nguid": "E45B516911004DA18EDCCDCB74372A78", 00:19:02.424 "no_auto_visible": false 00:19:02.424 }, 00:19:02.424 "method": "nvmf_subsystem_add_ns", 00:19:02.424 "req_id": 1 00:19:02.424 } 00:19:02.424 Got JSON-RPC error response 00:19:02.424 response: 00:19:02.424 { 00:19:02.424 "code": -32602, 00:19:02.424 "message": "Invalid parameters" 00:19:02.424 } 00:19:02.424 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:02.424 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:02.424 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:02.424 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:02.424 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid e45b5169-1100-4da1-8edc-cdcb74372a78 00:19:02.424 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:02.424 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E45B516911004DA18EDCCDCB74372A78 -i 00:19:02.686 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:04.601 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:04.601 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:04.601 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2657260 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2657260 ']' 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2657260 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2657260 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2657260' 00:19:04.861 killing process with pid 2657260 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2657260 00:19:04.861 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2657260 00:19:05.138 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.138 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:05.138 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:05.138 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:05.138 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:05.138 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:05.138 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:05.138 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:05.138 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:05.138 rmmod nvme_tcp 00:19:05.138 rmmod nvme_fabrics 00:19:05.138 rmmod nvme_keyring 00:19:05.138 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:05.138 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:05.138 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2654587 ']' 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2654587 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2654587 ']' 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2654587 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2654587 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2654587' 00:19:05.401 killing process with pid 2654587 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2654587 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2654587 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.401 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:07.950 00:19:07.950 real 0m28.320s 00:19:07.950 user 0m32.053s 00:19:07.950 sys 0m8.308s 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:07.950 ************************************ 00:19:07.950 END TEST nvmf_ns_masking 00:19:07.950 ************************************ 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:07.950 ************************************ 00:19:07.950 START TEST nvmf_nvme_cli 00:19:07.950 ************************************ 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:07.950 * Looking for test storage... 
00:19:07.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.950 --rc genhtml_branch_coverage=1 00:19:07.950 --rc genhtml_function_coverage=1 00:19:07.950 --rc genhtml_legend=1 00:19:07.950 --rc geninfo_all_blocks=1 00:19:07.950 --rc geninfo_unexecuted_blocks=1 00:19:07.950 00:19:07.950 ' 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.950 --rc genhtml_branch_coverage=1 00:19:07.950 --rc genhtml_function_coverage=1 00:19:07.950 --rc genhtml_legend=1 00:19:07.950 --rc geninfo_all_blocks=1 00:19:07.950 --rc geninfo_unexecuted_blocks=1 00:19:07.950 00:19:07.950 ' 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.950 --rc genhtml_branch_coverage=1 00:19:07.950 --rc genhtml_function_coverage=1 00:19:07.950 --rc genhtml_legend=1 00:19:07.950 --rc geninfo_all_blocks=1 00:19:07.950 --rc geninfo_unexecuted_blocks=1 00:19:07.950 00:19:07.950 ' 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:07.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.950 --rc genhtml_branch_coverage=1 00:19:07.950 --rc genhtml_function_coverage=1 00:19:07.950 --rc genhtml_legend=1 00:19:07.950 --rc geninfo_all_blocks=1 00:19:07.950 --rc geninfo_unexecuted_blocks=1 00:19:07.950 00:19:07.950 ' 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
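The trace above walks autotest's lt/cmp_versions helper as it decides lcov 1.15 < 2, which selects the legacy --rc lcov_* option set exported just after. A minimal standalone sketch of that comparison logic follows; the function name and the simplified numeric handling are mine, not the exact scripts/common.sh implementation:

#!/usr/bin/env bash
# Sketch of the version gate traced above: split each version on '.' / '-',
# compare numeric fields left to right; exit status 0 means v1 < v2.
version_lt() {
    local IFS='.-'
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}      # missing fields compare as 0
        [[ $a =~ ^[0-9]+$ ]] || a=0      # simplification vs. common.sh's decimal()
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                             # equal is not "less than"
}
version_lt 1.15 2 && echo "lcov < 2: legacy --rc lcov_* options apply"
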
00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.950 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.951 06:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:07.951 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.096 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:16.097 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:16.097 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.097 
06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:16.097 Found net devices under 0000:31:00.0: cvl_0_0 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:16.097 Found net devices under 0000:31:00.1: cvl_0_1 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.097 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.097 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.097 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.097 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:16.097 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.097 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.097 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.097 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:16.097 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:16.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:19:16.097 00:19:16.097 --- 10.0.0.2 ping statistics --- 00:19:16.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.097 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:19:16.097 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:19:16.097 00:19:16.097 --- 10.0.0.1 ping statistics --- 00:19:16.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.097 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:19:16.097 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.097 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2662995 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2662995 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 2662995 ']' 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:16.098 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.098 [2024-11-20 06:30:35.352880] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
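Condensed from the nvmf_tcp_init trace above: the target-side port is moved into a network namespace so initiator and target traffic crosses the cvl_0_0/cvl_0_1 NIC pair rather than loopback. A replay of the same commands as root (interface names and addresses are taken from this log; the iptables comment matching from the ipts wrapper is dropped here):

# cvl_0_0 = target NIC (inside netns), cvl_0_1 = initiator NIC (root netns).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

The target itself then runs under ip netns exec, which is why the sub-millisecond pings above are a meaningful readiness check before nvmf_tgt is started.
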
00:19:16.098 [2024-11-20 06:30:35.352946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.098 [2024-11-20 06:30:35.455613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.098 [2024-11-20 06:30:35.509370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.098 [2024-11-20 06:30:35.509426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.098 [2024-11-20 06:30:35.509440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.098 [2024-11-20 06:30:35.509447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.098 [2024-11-20 06:30:35.509453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.098 [2024-11-20 06:30:35.511545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.098 [2024-11-20 06:30:35.511706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.098 [2024-11-20 06:30:35.511864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.098 [2024-11-20 06:30:35.512005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.359 [2024-11-20 06:30:36.229807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.359 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.621 Malloc0 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
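The nvme_cli test provisions its target with a short JSON-RPC sequence (begun above and continuing below): create the TCP transport, back it with two 64 MiB malloc bdevs, publish both as namespaces of one subsystem, and listen on 10.0.0.2:4420. The same sequence as direct rpc.py calls, as a sketch; the $rpc variable is mine, the flags are copied from the trace, and in the test these go through the rpc_cmd wrapper against the namespaced target's default /var/tmp/spdk.sock:

# Provisioning sequence equivalent to the rpc_cmd calls in this test.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the discovery listener in place, the nvme discover call later in the trace returns the two-record discovery log shown below.
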
00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.621 Malloc1 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.621 [2024-11-20 06:30:36.348430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:19:16.621 00:19:16.621 Discovery Log Number of Records 2, Generation counter 2 00:19:16.621 =====Discovery Log Entry 0====== 00:19:16.621 trtype: tcp 00:19:16.621 adrfam: ipv4 00:19:16.621 subtype: current discovery subsystem 00:19:16.621 treq: not required 00:19:16.621 portid: 0 00:19:16.621 trsvcid: 4420 00:19:16.621 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:16.621 traddr: 10.0.0.2 00:19:16.621 eflags: explicit discovery connections, duplicate discovery information 00:19:16.621 sectype: none 00:19:16.621 =====Discovery Log Entry 1====== 00:19:16.621 trtype: tcp 00:19:16.621 adrfam: ipv4 00:19:16.621 subtype: nvme subsystem 00:19:16.621 treq: not required 00:19:16.621 portid: 0 00:19:16.621 trsvcid: 4420 00:19:16.621 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:16.621 traddr: 10.0.0.2 00:19:16.621 eflags: none 00:19:16.621 sectype: none 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:16.621 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:18.537 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:18.537 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:19:18.537 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:18.537 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:19:18.537 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:19:18.537 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:20.471 06:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:20.471 /dev/nvme0n2 ]] 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:20.471 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:20.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:20.733 06:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:20.733 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:19:20.733 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:19:20.733 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:20.733 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:19:20.733 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:20.733 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:19:20.733 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:20.733 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:20.734 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.734 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:20.995 rmmod nvme_tcp 00:19:20.995 rmmod nvme_fabrics 00:19:20.995 rmmod nvme_keyring 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2662995 ']' 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2662995 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 2662995 ']' 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 2662995 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
2662995 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2662995' 00:19:20.995 killing process with pid 2662995 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 2662995 00:19:20.995 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 2662995 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.256 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.171 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:23.171 00:19:23.171 real 0m15.589s 00:19:23.171 user 0m23.756s 00:19:23.171 sys 0m6.552s 00:19:23.171 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:23.171 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:23.171 ************************************ 00:19:23.171 END TEST nvmf_nvme_cli 00:19:23.171 ************************************ 00:19:23.171 06:30:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:23.171 06:30:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:23.171 06:30:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:23.171 06:30:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:23.171 06:30:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:23.171 ************************************ 00:19:23.171 START TEST nvmf_vfio_user 00:19:23.171 ************************************ 00:19:23.171 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:19:23.433 * Looking for test storage... 00:19:23.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:23.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.433 --rc genhtml_branch_coverage=1 00:19:23.433 --rc genhtml_function_coverage=1 00:19:23.433 --rc genhtml_legend=1 00:19:23.433 --rc geninfo_all_blocks=1 00:19:23.433 --rc geninfo_unexecuted_blocks=1 00:19:23.433 00:19:23.433 ' 00:19:23.433 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:23.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.433 --rc genhtml_branch_coverage=1 00:19:23.433 --rc genhtml_function_coverage=1 00:19:23.433 --rc genhtml_legend=1 00:19:23.433 --rc geninfo_all_blocks=1 00:19:23.434 --rc geninfo_unexecuted_blocks=1 00:19:23.434 00:19:23.434 ' 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:23.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.434 --rc genhtml_branch_coverage=1 00:19:23.434 --rc genhtml_function_coverage=1 00:19:23.434 --rc genhtml_legend=1 00:19:23.434 --rc geninfo_all_blocks=1 00:19:23.434 --rc geninfo_unexecuted_blocks=1 00:19:23.434 00:19:23.434 ' 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:23.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.434 --rc genhtml_branch_coverage=1 00:19:23.434 --rc genhtml_function_coverage=1 00:19:23.434 --rc genhtml_legend=1 00:19:23.434 --rc geninfo_all_blocks=1 00:19:23.434 --rc geninfo_unexecuted_blocks=1 00:19:23.434 00:19:23.434 ' 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:23.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
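The "[: : integer expression expected" complaint from common.sh line 33 a few lines above is bash refusing to integer-compare an empty string ('[' '' -eq 1 ']'); the guard falls through harmlessly, but it is noisy on every source of common.sh. A defensive sketch of the same test, with SPDK_TEST_FLAG as a hypothetical stand-in for whichever variable is unset here:

    # '[ "" -eq 1 ]' is a bash integer-test error; defaulting the expansion keeps it quiet.
    # SPDK_TEST_FLAG is a hypothetical name for the variable that is empty in this run.
    if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi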
00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2664546 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2664546' 00:19:23.434 Process pid: 2664546 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2664546 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2664546 ']' 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.434 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:23.435 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.435 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.435 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:23.696 [2024-11-20 06:30:43.381888] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:19:23.696 [2024-11-20 06:30:43.381960] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.696 [2024-11-20 06:30:43.471009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.696 [2024-11-20 06:30:43.505852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.696 [2024-11-20 06:30:43.505887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
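Here nvmf_tgt comes up under waitforlisten, which blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern (the real helper adds timeouts and more liveness checks; spdk_get_version is simply one RPC that succeeds once the socket is live):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # poll the RPC socket until the app starts answering
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1  # give up if the target died during startup
        sleep 0.5
    done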
00:19:23.696 [2024-11-20 06:30:43.505893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.696 [2024-11-20 06:30:43.505898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.696 [2024-11-20 06:30:43.505902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.696 [2024-11-20 06:30:43.507391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.696 [2024-11-20 06:30:43.507546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.696 [2024-11-20 06:30:43.507697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.696 [2024-11-20 06:30:43.507698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.638 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:24.638 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:19:24.638 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:25.581 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:25.581 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:25.581 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:25.581 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:25.581 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:25.581 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:25.841 Malloc1 00:19:25.841 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:26.101 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:26.101 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:26.362 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:26.362 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:26.362 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:26.622 Malloc2 00:19:26.623 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
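Condensed, the vfio-user setup being traced here (the second device's add_ns and add_listener calls follow just below) is one transport plus five per-device steps; every command and argument in this sketch appears verbatim in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER          # once, before any device
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i  # 64 MB backing bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Note that the listener address is a directory, not an IP: the target creates the vfio-user control socket and BAR files under it, and the spdk_nvme_identify run below attaches with traddr pointing at the same path.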
00:19:26.623 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:26.883 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:27.145 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:27.146 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:27.146 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:27.146 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:27.146 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:27.146 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:27.146 [2024-11-20 06:30:46.919750] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:19:27.146 [2024-11-20 06:30:46.919794] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665310 ] 00:19:27.146 [2024-11-20 06:30:46.959056] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:27.146 [2024-11-20 06:30:46.968036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:27.146 [2024-11-20 06:30:46.968053] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3c8bbf8000 00:19:27.146 [2024-11-20 06:30:46.969042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:27.146 [2024-11-20 06:30:46.970040] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:27.146 [2024-11-20 06:30:46.971042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:27.146 [2024-11-20 06:30:46.972052] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:27.146 [2024-11-20 06:30:46.973057] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:27.146 [2024-11-20 06:30:46.974061] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:27.146 [2024-11-20 06:30:46.975069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:19:27.146 [2024-11-20 06:30:46.976072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:27.146 [2024-11-20 06:30:46.977082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:27.146 [2024-11-20 06:30:46.977090] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3c8bbed000 00:19:27.146 [2024-11-20 06:30:46.978006] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:27.146 [2024-11-20 06:30:46.987454] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:27.146 [2024-11-20 06:30:46.987477] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:19:27.146 [2024-11-20 06:30:46.992190] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:27.146 [2024-11-20 06:30:46.992227] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:27.146 [2024-11-20 06:30:46.992290] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:19:27.146 [2024-11-20 06:30:46.992304] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:19:27.146 [2024-11-20 06:30:46.992311] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:19:27.146 [2024-11-20 06:30:46.993190] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:27.146 [2024-11-20 06:30:46.993197] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:19:27.146 [2024-11-20 06:30:46.993202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:19:27.146 [2024-11-20 06:30:46.994195] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:27.146 [2024-11-20 06:30:46.994201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:19:27.146 [2024-11-20 06:30:46.994207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:27.146 [2024-11-20 06:30:46.995199] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:27.146 [2024-11-20 06:30:46.995205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:27.146 [2024-11-20 06:30:46.996211] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
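The register reads above, together with the admin-queue programming that follows just below, are the standard NVMe controller bring-up, carried over the vfio-user socket instead of a PCIe BAR. Annotated against the values in this trace (offsets per the NVMe register map; -> is a read, <- a write):

    # 0x00 CAP  -> 0x201e0100ff    capabilities: max queue entries, timeout, doorbell stride
    # 0x08 VS   -> 0x10300         controller reports NVMe spec version 1.3
    # 0x14 CC   -> 0x0             EN=0: controller currently disabled
    # 0x1c CSTS -> 0x0             RDY=0 confirms the disabled state
    # 0x28 ASQ  <- 0x2000003c0000  admin submission queue base address
    # 0x30 ACQ  <- 0x2000003be000  admin completion queue base address
    # 0x24 AQA  <- 0xff00ff        admin queue sizes (256 entries each, zero-based)
    # 0x14 CC   <- 0x460001        EN=1 plus IOSQES/IOCQES: host enables the controller
    # 0x1c CSTS -> 0x1             RDY=1: ready, and the Identify commands proceed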
00:19:27.146 [2024-11-20 06:30:46.996217] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:27.146 [2024-11-20 06:30:46.996221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:27.146 [2024-11-20 06:30:46.996226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:27.146 [2024-11-20 06:30:46.996332] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:19:27.146 [2024-11-20 06:30:46.996336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:27.146 [2024-11-20 06:30:46.996340] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:27.146 [2024-11-20 06:30:46.997216] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:27.146 [2024-11-20 06:30:46.998217] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:27.146 [2024-11-20 06:30:46.999222] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:27.146 [2024-11-20 06:30:47.000220] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:27.146 [2024-11-20 06:30:47.000271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:27.146 [2024-11-20 06:30:47.001234] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:27.146 [2024-11-20 06:30:47.001240] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:27.146 [2024-11-20 06:30:47.001244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:27.146 [2024-11-20 06:30:47.001261] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:19:27.146 [2024-11-20 06:30:47.001267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:27.146 [2024-11-20 06:30:47.001279] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:27.146 [2024-11-20 06:30:47.001283] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:27.146 [2024-11-20 06:30:47.001286] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:27.146 [2024-11-20 06:30:47.001297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:19:27.146 [2024-11-20 06:30:47.001333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:27.146 [2024-11-20 06:30:47.001341] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:19:27.146 [2024-11-20 06:30:47.001345] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:19:27.146 [2024-11-20 06:30:47.001348] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:19:27.146 [2024-11-20 06:30:47.001351] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:27.146 [2024-11-20 06:30:47.001359] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:19:27.146 [2024-11-20 06:30:47.001362] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:19:27.147 [2024-11-20 06:30:47.001366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:27.147 [2024-11-20 06:30:47.001394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:27.147 [2024-11-20 06:30:47.001402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.147 [2024-11-20 06:30:47.001409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.147 [2024-11-20 06:30:47.001415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.147 [2024-11-20 06:30:47.001421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.147 [2024-11-20 06:30:47.001425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:27.147 [2024-11-20 06:30:47.001444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:27.147 [2024-11-20 06:30:47.001451] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:19:27.147 
[2024-11-20 06:30:47.001455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:27.147 [2024-11-20 06:30:47.001482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:27.147 [2024-11-20 06:30:47.001527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001538] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:27.147 [2024-11-20 06:30:47.001541] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:27.147 [2024-11-20 06:30:47.001544] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:27.147 [2024-11-20 06:30:47.001548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:27.147 [2024-11-20 06:30:47.001558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:27.147 [2024-11-20 06:30:47.001566] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:19:27.147 [2024-11-20 06:30:47.001574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001585] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:27.147 [2024-11-20 06:30:47.001588] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:27.147 [2024-11-20 06:30:47.001590] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:27.147 [2024-11-20 06:30:47.001594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:27.147 [2024-11-20 06:30:47.001614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:27.147 [2024-11-20 06:30:47.001624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001635] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:27.147 [2024-11-20 06:30:47.001638] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:27.147 [2024-11-20 06:30:47.001640] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:27.147 [2024-11-20 06:30:47.001646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:27.147 [2024-11-20 06:30:47.001657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:27.147 [2024-11-20 06:30:47.001663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001690] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:27.147 [2024-11-20 06:30:47.001693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:19:27.147 [2024-11-20 06:30:47.001697] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:19:27.147 [2024-11-20 06:30:47.001711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:27.147 [2024-11-20 06:30:47.001722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:27.147 [2024-11-20 06:30:47.001731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:27.147 [2024-11-20 06:30:47.001738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:27.147 [2024-11-20 06:30:47.001750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:27.147 [2024-11-20 06:30:47.001759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:27.147 [2024-11-20 06:30:47.001767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:27.147 [2024-11-20 06:30:47.001776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:27.147 [2024-11-20 06:30:47.001786] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:27.147 [2024-11-20 06:30:47.001789] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:27.147 [2024-11-20 06:30:47.001792] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:27.147 [2024-11-20 06:30:47.001794] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:27.147 [2024-11-20 06:30:47.001797] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:27.147 [2024-11-20 06:30:47.001802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:27.147 [2024-11-20 06:30:47.001807] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:27.147 [2024-11-20 06:30:47.001812] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:27.147 [2024-11-20 06:30:47.001814] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:27.148 [2024-11-20 06:30:47.001818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:27.148 [2024-11-20 06:30:47.001824] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:27.148 [2024-11-20 06:30:47.001827] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:27.148 [2024-11-20 06:30:47.001829] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:27.148 [2024-11-20 06:30:47.001833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:27.148 [2024-11-20 06:30:47.001839] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:27.148 [2024-11-20 06:30:47.001842] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:27.148 [2024-11-20 06:30:47.001844] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:27.148 [2024-11-20 06:30:47.001848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:27.148 [2024-11-20 06:30:47.001853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:27.148 [2024-11-20 06:30:47.001862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:19:27.148 [2024-11-20 06:30:47.001871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:27.148 [2024-11-20 06:30:47.001877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:27.148 ===================================================== 00:19:27.148 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:27.148 ===================================================== 00:19:27.148 Controller Capabilities/Features 00:19:27.148 ================================ 00:19:27.148 Vendor ID: 4e58 00:19:27.148 Subsystem Vendor ID: 4e58 00:19:27.148 Serial Number: SPDK1 00:19:27.148 Model Number: SPDK bdev Controller 00:19:27.148 Firmware Version: 25.01 00:19:27.148 Recommended Arb Burst: 6 00:19:27.148 IEEE OUI Identifier: 8d 6b 50 00:19:27.148 Multi-path I/O 00:19:27.148 May have multiple subsystem ports: Yes 00:19:27.148 May have multiple controllers: Yes 00:19:27.148 Associated with SR-IOV VF: No 00:19:27.148 Max Data Transfer Size: 131072 00:19:27.148 Max Number of Namespaces: 32 00:19:27.148 Max Number of I/O Queues: 127 00:19:27.148 NVMe Specification Version (VS): 1.3 00:19:27.148 NVMe Specification Version (Identify): 1.3 00:19:27.148 Maximum Queue Entries: 256 00:19:27.148 Contiguous Queues Required: Yes 00:19:27.148 Arbitration Mechanisms Supported 00:19:27.148 Weighted Round Robin: Not Supported 00:19:27.148 Vendor Specific: Not Supported 00:19:27.148 Reset Timeout: 15000 ms 00:19:27.148 Doorbell Stride: 4 bytes 00:19:27.148 NVM Subsystem Reset: Not Supported 00:19:27.148 Command Sets Supported 00:19:27.148 NVM Command Set: Supported 00:19:27.148 Boot Partition: Not Supported 00:19:27.148 Memory Page Size Minimum: 4096 bytes 00:19:27.148 Memory Page Size Maximum: 4096 bytes 00:19:27.148 Persistent Memory Region: Not Supported 00:19:27.148 Optional Asynchronous Events Supported 00:19:27.148 Namespace Attribute Notices: Supported 00:19:27.148 Firmware Activation Notices: Not Supported 00:19:27.148 ANA Change Notices: Not Supported 00:19:27.148 PLE Aggregate Log Change Notices: Not Supported 00:19:27.148 LBA Status Info Alert Notices: Not Supported 00:19:27.148 EGE Aggregate Log Change Notices: Not Supported 00:19:27.148 Normal NVM Subsystem Shutdown event: Not Supported 00:19:27.148 Zone Descriptor Change Notices: Not Supported 00:19:27.148 Discovery Log Change Notices: Not Supported 00:19:27.148 Controller Attributes 00:19:27.148 128-bit Host Identifier: Supported 00:19:27.148 Non-Operational Permissive Mode: Not Supported 00:19:27.148 NVM Sets: Not Supported 00:19:27.148 Read Recovery Levels: Not Supported 00:19:27.148 Endurance Groups: Not Supported 00:19:27.148 Predictable Latency Mode: Not Supported 00:19:27.148 Traffic Based Keep ALive: Not Supported 00:19:27.148 Namespace Granularity: Not Supported 00:19:27.148 SQ Associations: Not Supported 00:19:27.148 UUID List: Not Supported 00:19:27.148 Multi-Domain Subsystem: Not Supported 00:19:27.148 Fixed Capacity Management: Not Supported 00:19:27.148 Variable Capacity Management: Not Supported 00:19:27.148 Delete Endurance Group: Not Supported 00:19:27.148 Delete NVM Set: Not Supported 00:19:27.148 Extended LBA Formats Supported: Not Supported 00:19:27.148 Flexible Data Placement Supported: Not Supported 00:19:27.148 00:19:27.148 Controller Memory Buffer Support 00:19:27.148 ================================ 00:19:27.148 
Supported: No 00:19:27.148 00:19:27.148 Persistent Memory Region Support 00:19:27.148 ================================ 00:19:27.148 Supported: No 00:19:27.148 00:19:27.148 Admin Command Set Attributes 00:19:27.148 ============================ 00:19:27.148 Security Send/Receive: Not Supported 00:19:27.148 Format NVM: Not Supported 00:19:27.148 Firmware Activate/Download: Not Supported 00:19:27.148 Namespace Management: Not Supported 00:19:27.148 Device Self-Test: Not Supported 00:19:27.148 Directives: Not Supported 00:19:27.148 NVMe-MI: Not Supported 00:19:27.148 Virtualization Management: Not Supported 00:19:27.148 Doorbell Buffer Config: Not Supported 00:19:27.148 Get LBA Status Capability: Not Supported 00:19:27.148 Command & Feature Lockdown Capability: Not Supported 00:19:27.148 Abort Command Limit: 4 00:19:27.148 Async Event Request Limit: 4 00:19:27.148 Number of Firmware Slots: N/A 00:19:27.148 Firmware Slot 1 Read-Only: N/A 00:19:27.148 Firmware Activation Without Reset: N/A 00:19:27.148 Multiple Update Detection Support: N/A 00:19:27.148 Firmware Update Granularity: No Information Provided 00:19:27.148 Per-Namespace SMART Log: No 00:19:27.148 Asymmetric Namespace Access Log Page: Not Supported 00:19:27.148 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:27.148 Command Effects Log Page: Supported 00:19:27.148 Get Log Page Extended Data: Supported 00:19:27.148 Telemetry Log Pages: Not Supported 00:19:27.148 Persistent Event Log Pages: Not Supported 00:19:27.148 Supported Log Pages Log Page: May Support 00:19:27.148 Commands Supported & Effects Log Page: Not Supported 00:19:27.148 Feature Identifiers & Effects Log Page:May Support 00:19:27.148 NVMe-MI Commands & Effects Log Page: May Support 00:19:27.148 Data Area 4 for Telemetry Log: Not Supported 00:19:27.148 Error Log Page Entries Supported: 128 00:19:27.148 Keep Alive: Supported 00:19:27.148 Keep Alive Granularity: 10000 ms 00:19:27.148 00:19:27.148 NVM Command Set Attributes 00:19:27.148 ========================== 00:19:27.148 Submission Queue Entry Size 00:19:27.148 Max: 64 00:19:27.148 Min: 64 00:19:27.148 Completion Queue Entry Size 00:19:27.148 Max: 16 00:19:27.148 Min: 16 00:19:27.148 Number of Namespaces: 32 00:19:27.148 Compare Command: Supported 00:19:27.148 Write Uncorrectable Command: Not Supported 00:19:27.148 Dataset Management Command: Supported 00:19:27.148 Write Zeroes Command: Supported 00:19:27.148 Set Features Save Field: Not Supported 00:19:27.148 Reservations: Not Supported 00:19:27.148 Timestamp: Not Supported 00:19:27.148 Copy: Supported 00:19:27.148 Volatile Write Cache: Present 00:19:27.148 Atomic Write Unit (Normal): 1 00:19:27.148 Atomic Write Unit (PFail): 1 00:19:27.148 Atomic Compare & Write Unit: 1 00:19:27.148 Fused Compare & Write: Supported 00:19:27.148 Scatter-Gather List 00:19:27.148 SGL Command Set: Supported (Dword aligned) 00:19:27.148 SGL Keyed: Not Supported 00:19:27.148 SGL Bit Bucket Descriptor: Not Supported 00:19:27.148 SGL Metadata Pointer: Not Supported 00:19:27.148 Oversized SGL: Not Supported 00:19:27.148 SGL Metadata Address: Not Supported 00:19:27.148 SGL Offset: Not Supported 00:19:27.148 Transport SGL Data Block: Not Supported 00:19:27.148 Replay Protected Memory Block: Not Supported 00:19:27.148 00:19:27.148 Firmware Slot Information 00:19:27.148 ========================= 00:19:27.148 Active slot: 1 00:19:27.148 Slot 1 Firmware Revision: 25.01 00:19:27.148 00:19:27.148 00:19:27.148 Commands Supported and Effects 00:19:27.148 ============================== 00:19:27.148 Admin 
Commands 00:19:27.148 -------------- 00:19:27.148 Get Log Page (02h): Supported 00:19:27.148 Identify (06h): Supported 00:19:27.148 Abort (08h): Supported 00:19:27.148 Set Features (09h): Supported 00:19:27.148 Get Features (0Ah): Supported 00:19:27.148 Asynchronous Event Request (0Ch): Supported 00:19:27.148 Keep Alive (18h): Supported 00:19:27.148 I/O Commands 00:19:27.148 ------------ 00:19:27.148 Flush (00h): Supported LBA-Change 00:19:27.148 Write (01h): Supported LBA-Change 00:19:27.148 Read (02h): Supported 00:19:27.148 Compare (05h): Supported 00:19:27.148 Write Zeroes (08h): Supported LBA-Change 00:19:27.148 Dataset Management (09h): Supported LBA-Change 00:19:27.148 Copy (19h): Supported LBA-Change 00:19:27.148 00:19:27.148 Error Log 00:19:27.148 ========= 00:19:27.148 00:19:27.148 Arbitration 00:19:27.148 =========== 00:19:27.148 Arbitration Burst: 1 00:19:27.148 00:19:27.148 Power Management 00:19:27.148 ================ 00:19:27.148 Number of Power States: 1 00:19:27.148 Current Power State: Power State #0 00:19:27.148 Power State #0: 00:19:27.148 Max Power: 0.00 W 00:19:27.148 Non-Operational State: Operational 00:19:27.148 Entry Latency: Not Reported 00:19:27.148 Exit Latency: Not Reported 00:19:27.148 Relative Read Throughput: 0 00:19:27.148 Relative Read Latency: 0 00:19:27.148 Relative Write Throughput: 0 00:19:27.148 Relative Write Latency: 0 00:19:27.148 Idle Power: Not Reported 00:19:27.148 Active Power: Not Reported 00:19:27.148 Non-Operational Permissive Mode: Not Supported 00:19:27.148 00:19:27.148 Health Information 00:19:27.148 ================== 00:19:27.148 Critical Warnings: 00:19:27.148 Available Spare Space: OK 00:19:27.148 Temperature: OK 00:19:27.148 Device Reliability: OK 00:19:27.148 Read Only: No 00:19:27.149 Volatile Memory Backup: OK 00:19:27.149 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:27.149 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:27.149 Available Spare: 0% 00:19:27.149 Available Sp[2024-11-20 06:30:47.001952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:27.149 [2024-11-20 06:30:47.001962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:27.149 [2024-11-20 06:30:47.001982] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:19:27.149 [2024-11-20 06:30:47.001989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.149 [2024-11-20 06:30:47.001994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.149 [2024-11-20 06:30:47.001999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.149 [2024-11-20 06:30:47.002003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.149 [2024-11-20 06:30:47.002238] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:27.149 [2024-11-20 06:30:47.002246] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:27.149 [2024-11-20 06:30:47.003241] 
vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:27.149 [2024-11-20 06:30:47.003280] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:19:27.149 [2024-11-20 06:30:47.003285] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:19:27.149 [2024-11-20 06:30:47.004245] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:27.149 [2024-11-20 06:30:47.004256] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:19:27.149 [2024-11-20 06:30:47.004310] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:27.149 [2024-11-20 06:30:47.006752] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:27.149 are Threshold: 0% 00:19:27.149 Life Percentage Used: 0% 00:19:27.149 Data Units Read: 0 00:19:27.149 Data Units Written: 0 00:19:27.149 Host Read Commands: 0 00:19:27.149 Host Write Commands: 0 00:19:27.149 Controller Busy Time: 0 minutes 00:19:27.149 Power Cycles: 0 00:19:27.149 Power On Hours: 0 hours 00:19:27.149 Unsafe Shutdowns: 0 00:19:27.149 Unrecoverable Media Errors: 0 00:19:27.149 Lifetime Error Log Entries: 0 00:19:27.149 Warning Temperature Time: 0 minutes 00:19:27.149 Critical Temperature Time: 0 minutes 00:19:27.149 00:19:27.149 Number of Queues 00:19:27.149 ================ 00:19:27.149 Number of I/O Submission Queues: 127 00:19:27.149 Number of I/O Completion Queues: 127 00:19:27.149 00:19:27.149 Active Namespaces 00:19:27.149 ================= 00:19:27.149 Namespace ID:1 00:19:27.149 Error Recovery Timeout: Unlimited 00:19:27.149 Command Set Identifier: NVM (00h) 00:19:27.149 Deallocate: Supported 00:19:27.149 Deallocated/Unwritten Error: Not Supported 00:19:27.149 Deallocated Read Value: Unknown 00:19:27.149 Deallocate in Write Zeroes: Not Supported 00:19:27.149 Deallocated Guard Field: 0xFFFF 00:19:27.149 Flush: Supported 00:19:27.149 Reservation: Supported 00:19:27.149 Namespace Sharing Capabilities: Multiple Controllers 00:19:27.149 Size (in LBAs): 131072 (0GiB) 00:19:27.149 Capacity (in LBAs): 131072 (0GiB) 00:19:27.149 Utilization (in LBAs): 131072 (0GiB) 00:19:27.149 NGUID: CD8BA188EE6E472C94824EF81A4A118B 00:19:27.149 UUID: cd8ba188-ee6e-472c-9482-4ef81a4a118b 00:19:27.149 Thin Provisioning: Not Supported 00:19:27.149 Per-NS Atomic Units: Yes 00:19:27.149 Atomic Boundary Size (Normal): 0 00:19:27.149 Atomic Boundary Size (PFail): 0 00:19:27.149 Atomic Boundary Offset: 0 00:19:27.149 Maximum Single Source Range Length: 65535 00:19:27.149 Maximum Copy Length: 65535 00:19:27.149 Maximum Source Range Count: 1 00:19:27.149 NGUID/EUI64 Never Reused: No 00:19:27.149 Namespace Write Protected: No 00:19:27.149 Number of LBA Formats: 1 00:19:27.149 Current LBA Format: LBA Format #00 00:19:27.149 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:27.149 00:19:27.149 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
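(For reference: the sh@84 spdk_nvme_perf invocation above decodes as follows. A sketch only, with $SPDK standing in for the workspace checkout at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk; the -s/-g memory flags are left exactly as the harness passes them.)

    # -r       connect over the vfio-user transport to the given socket dir and subsystem NQN
    # -q 128   keep 128 I/Os outstanding per queue
    # -o 4096  4096-byte I/O size
    # -w read  100% read workload
    # -t 5     run for 5 seconds
    # -c 0x2   core mask: run the I/O worker on core 1
    # -s 256 -g  memory setup flags carried over from the test script
    $SPDK/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2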
00:19:27.410 [2024-11-20 06:30:47.195418] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:19:32.697 Initializing NVMe Controllers
00:19:32.697 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:19:32.697 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:19:32.697 Initialization complete. Launching workers.
00:19:32.697 ========================================================
00:19:32.697 Latency(us)
00:19:32.697 Device Information : IOPS MiB/s Average min max
00:19:32.697 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39944.87 156.03 3204.09 848.15 6807.77
00:19:32.697 ========================================================
00:19:32.697 Total : 39944.87 156.03 3204.09 848.15 6807.77
00:19:32.697
00:19:32.697 [2024-11-20 06:30:52.214337] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:19:32.697 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:19:32.697 [2024-11-20 06:30:52.402209] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:19:37.988 Initializing NVMe Controllers
00:19:37.988 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:19:37.988 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:19:37.988 Initialization complete. Launching workers.
00:19:37.988 ========================================================
00:19:37.988 Latency(us)
00:19:37.988 Device Information : IOPS MiB/s Average min max
00:19:37.988 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7983.89 4986.66 10976.48
00:19:37.988 ========================================================
00:19:37.988 Total : 16051.20 62.70 7983.89 4986.66 10976.48
00:19:37.988
00:19:37.988 [2024-11-20 06:30:57.438326] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:19:37.988 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:19:37.988 [2024-11-20 06:30:57.636178] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:19:43.272 [2024-11-20 06:31:02.705974] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:19:43.272 Initializing NVMe Controllers
00:19:43.272 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:19:43.272 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:19:43.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:19:43.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:19:43.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:19:43.272 Initialization complete. Launching workers.
00:19:43.272 Starting thread on core 2
00:19:43.272 Starting thread on core 3
00:19:43.272 Starting thread on core 1
00:19:43.272 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:19:43.272 [2024-11-20 06:31:02.960079] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:19:46.576 [2024-11-20 06:31:06.021078] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:19:46.576 Initializing NVMe Controllers
00:19:46.576 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:19:46.576 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:19:46.576 Associating SPDK bdev Controller (SPDK1 ) with lcore 0
00:19:46.576 Associating SPDK bdev Controller (SPDK1 ) with lcore 1
00:19:46.576 Associating SPDK bdev Controller (SPDK1 ) with lcore 2
00:19:46.576 Associating SPDK bdev Controller (SPDK1 ) with lcore 3
00:19:46.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:19:46.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:19:46.576 Initialization complete. Launching workers.
00:19:46.576 Starting thread on core 1 with urgent priority queue
00:19:46.576 Starting thread on core 2 with urgent priority queue
00:19:46.576 Starting thread on core 3 with urgent priority queue
00:19:46.576 Starting thread on core 0 with urgent priority queue
00:19:46.576 SPDK bdev Controller (SPDK1 ) core 0: 11049.00 IO/s 9.05 secs/100000 ios
00:19:46.576 SPDK bdev Controller (SPDK1 ) core 1: 13899.33 IO/s 7.19 secs/100000 ios
00:19:46.576 SPDK bdev Controller (SPDK1 ) core 2: 10555.33 IO/s 9.47 secs/100000 ios
00:19:46.576 SPDK bdev Controller (SPDK1 ) core 3: 17143.00 IO/s 5.83 secs/100000 ios
00:19:46.576 ========================================================
00:19:46.576
00:19:46.576 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:19:46.576 [2024-11-20 06:31:06.262172] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:19:46.576 Initializing NVMe Controllers
00:19:46.576 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:19:46.576 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:19:46.576 Namespace ID: 1 size: 0GB
00:19:46.576 Initialization complete.
00:19:46.576 INFO: using host memory buffer for IO
00:19:46.576 Hello world!
00:19:46.576 [2024-11-20 06:31:06.296380] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:19:46.576 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:19:46.836 [2024-11-20 06:31:06.537158] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:19:47.778 Initializing NVMe Controllers
00:19:47.778 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:19:47.778 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:19:47.778 Initialization complete. Launching workers.
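(Reading note for the output that follows: the overhead tool that has just attached prints a submit and a complete latency histogram. Going by the 'Range in us Cumulative Count' header, each row gives a latency bucket in microseconds, a running cumulative percentage, and the per-bucket sample count. For example, the submit row '2.907 - 2.920: 23.7919% ( 1157)' below means 1157 submissions landed in that bucket and 23.7919% of all submissions took 2.920 us or less. This reading is inferred from the header and the final 100.0000% rows.)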
00:19:47.778 submit (in ns) avg, min, max = 6328.5, 2834.2, 4000215.0 00:19:47.778 complete (in ns) avg, min, max = 18665.2, 1645.8, 3999550.8 00:19:47.778 00:19:47.778 Submit histogram 00:19:47.778 ================ 00:19:47.778 Range in us Cumulative Count 00:19:47.778 2.827 - 2.840: 0.0739% ( 15) 00:19:47.778 2.840 - 2.853: 0.8817% ( 164) 00:19:47.778 2.853 - 2.867: 3.2412% ( 479) 00:19:47.778 2.867 - 2.880: 6.8913% ( 741) 00:19:47.778 2.880 - 2.893: 12.4378% ( 1126) 00:19:47.778 2.893 - 2.907: 18.0927% ( 1148) 00:19:47.778 2.907 - 2.920: 23.7919% ( 1157) 00:19:47.778 2.920 - 2.933: 28.6932% ( 995) 00:19:47.778 2.933 - 2.947: 34.4121% ( 1161) 00:19:47.778 2.947 - 2.960: 40.0522% ( 1145) 00:19:47.778 2.960 - 2.973: 45.5544% ( 1117) 00:19:47.778 2.973 - 2.987: 52.2684% ( 1363) 00:19:47.778 2.987 - 3.000: 60.7753% ( 1727) 00:19:47.778 3.000 - 3.013: 69.8291% ( 1838) 00:19:47.778 3.013 - 3.027: 77.4149% ( 1540) 00:19:47.778 3.027 - 3.040: 84.7298% ( 1485) 00:19:47.778 3.040 - 3.053: 90.4980% ( 1171) 00:19:47.778 3.053 - 3.067: 94.3747% ( 787) 00:19:47.778 3.067 - 3.080: 96.7144% ( 475) 00:19:47.778 3.080 - 3.093: 98.1725% ( 296) 00:19:47.778 3.093 - 3.107: 98.8966% ( 147) 00:19:47.778 3.107 - 3.120: 99.2759% ( 77) 00:19:47.778 3.120 - 3.133: 99.4532% ( 36) 00:19:47.778 3.133 - 3.147: 99.5320% ( 16) 00:19:47.778 3.147 - 3.160: 99.5616% ( 6) 00:19:47.778 3.160 - 3.173: 99.5961% ( 7) 00:19:47.778 3.173 - 3.187: 99.6010% ( 1) 00:19:47.778 3.200 - 3.213: 99.6109% ( 2) 00:19:47.778 3.240 - 3.253: 99.6158% ( 1) 00:19:47.778 3.440 - 3.467: 99.6207% ( 1) 00:19:47.778 3.493 - 3.520: 99.6256% ( 1) 00:19:47.778 3.573 - 3.600: 99.6306% ( 1) 00:19:47.778 3.680 - 3.707: 99.6355% ( 1) 00:19:47.778 3.787 - 3.813: 99.6453% ( 2) 00:19:47.778 3.973 - 4.000: 99.6503% ( 1) 00:19:47.778 4.053 - 4.080: 99.6552% ( 1) 00:19:47.778 4.213 - 4.240: 99.6601% ( 1) 00:19:47.778 4.267 - 4.293: 99.6650% ( 1) 00:19:47.778 4.347 - 4.373: 99.6700% ( 1) 00:19:47.778 4.480 - 4.507: 99.6798% ( 2) 00:19:47.778 4.587 - 4.613: 99.6847% ( 1) 00:19:47.778 4.667 - 4.693: 99.6946% ( 2) 00:19:47.778 4.800 - 4.827: 99.6995% ( 1) 00:19:47.778 4.827 - 4.853: 99.7044% ( 1) 00:19:47.778 4.853 - 4.880: 99.7143% ( 2) 00:19:47.778 4.907 - 4.933: 99.7242% ( 2) 00:19:47.778 4.933 - 4.960: 99.7291% ( 1) 00:19:47.778 4.960 - 4.987: 99.7340% ( 1) 00:19:47.778 5.040 - 5.067: 99.7389% ( 1) 00:19:47.778 5.067 - 5.093: 99.7439% ( 1) 00:19:47.778 5.120 - 5.147: 99.7488% ( 1) 00:19:47.778 5.733 - 5.760: 99.7537% ( 1) 00:19:47.778 5.813 - 5.840: 99.7586% ( 1) 00:19:47.778 5.840 - 5.867: 99.7636% ( 1) 00:19:47.779 5.867 - 5.893: 99.7685% ( 1) 00:19:47.779 5.893 - 5.920: 99.7783% ( 2) 00:19:47.779 5.920 - 5.947: 99.7833% ( 1) 00:19:47.779 5.947 - 5.973: 99.7882% ( 1) 00:19:47.779 6.133 - 6.160: 99.7931% ( 1) 00:19:47.779 6.187 - 6.213: 99.7980% ( 1) 00:19:47.779 6.213 - 6.240: 99.8030% ( 1) 00:19:47.779 6.240 - 6.267: 99.8079% ( 1) 00:19:47.779 6.267 - 6.293: 99.8128% ( 1) 00:19:47.779 6.293 - 6.320: 99.8177% ( 1) 00:19:47.779 6.427 - 6.453: 99.8227% ( 1) 00:19:47.779 6.453 - 6.480: 99.8276% ( 1) 00:19:47.779 6.507 - 6.533: 99.8325% ( 1) 00:19:47.779 6.560 - 6.587: 99.8374% ( 1) 00:19:47.779 6.667 - 6.693: 99.8424% ( 1) 00:19:47.779 6.720 - 6.747: 99.8473% ( 1) 00:19:47.779 6.747 - 6.773: 99.8522% ( 1) 00:19:47.779 6.800 - 6.827: 99.8571% ( 1) 00:19:47.779 6.827 - 6.880: 99.8670% ( 2) 00:19:47.779 6.880 - 6.933: 99.8719% ( 1) 00:19:47.779 6.933 - 6.987: 99.8769% ( 1) 00:19:47.779 6.987 - 7.040: 99.8818% ( 1) 00:19:47.779 7.413 - 7.467: 99.8867% ( 1) 
00:19:47.779 7.733 - 7.787: 99.8916% ( 1) 00:19:47.779 7.840 - 7.893: 99.8966% ( 1) 00:19:47.779 [2024-11-20 06:31:07.556781] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:47.779 8.853 - 8.907: 99.9015% ( 1) 00:19:47.779 9.653 - 9.707: 99.9064% ( 1) 00:19:47.779 10.507 - 10.560: 99.9113% ( 1) 00:19:47.779 35.200 - 35.413: 99.9163% ( 1) 00:19:47.779 3986.773 - 4014.080: 100.0000% ( 17) 00:19:47.779 00:19:47.779 Complete histogram 00:19:47.779 ================== 00:19:47.779 Range in us Cumulative Count 00:19:47.779 1.640 - 1.647: 0.0049% ( 1) 00:19:47.779 1.653 - 1.660: 0.1527% ( 30) 00:19:47.779 1.660 - 1.667: 0.4286% ( 56) 00:19:47.779 1.667 - 1.673: 0.7192% ( 59) 00:19:47.779 1.673 - 1.680: 0.9359% ( 44) 00:19:47.779 1.680 - 1.687: 1.0541% ( 24) 00:19:47.779 1.687 - 1.693: 1.1379% ( 17) 00:19:47.779 1.693 - 1.700: 1.2019% ( 13) 00:19:47.779 1.700 - 1.707: 1.2709% ( 14) 00:19:47.779 1.707 - 1.720: 18.7035% ( 3539) 00:19:47.779 1.720 - 1.733: 53.6772% ( 7100) 00:19:47.779 1.733 - 1.747: 80.3310% ( 5411) 00:19:47.779 1.747 - 1.760: 92.5176% ( 2474) 00:19:47.779 1.760 - 1.773: 97.5026% ( 1012) 00:19:47.779 1.773 - 1.787: 98.9754% ( 299) 00:19:47.779 1.787 - 1.800: 99.2907% ( 64) 00:19:47.779 1.800 - 1.813: 99.3596% ( 14) 00:19:47.779 1.813 - 1.827: 99.3990% ( 8) 00:19:47.779 1.827 - 1.840: 99.4089% ( 2) 00:19:47.779 1.840 - 1.853: 99.4237% ( 3) 00:19:47.779 1.853 - 1.867: 99.4286% ( 1) 00:19:47.779 3.573 - 3.600: 99.4335% ( 1) 00:19:47.779 3.600 - 3.627: 99.4385% ( 1) 00:19:47.779 3.627 - 3.653: 99.4434% ( 1) 00:19:47.779 3.787 - 3.813: 99.4483% ( 1) 00:19:47.779 3.813 - 3.840: 99.4532% ( 1) 00:19:47.779 4.027 - 4.053: 99.4582% ( 1) 00:19:47.779 4.400 - 4.427: 99.4631% ( 1) 00:19:47.779 4.453 - 4.480: 99.4680% ( 1) 00:19:47.779 4.507 - 4.533: 99.4729% ( 1) 00:19:47.779 4.587 - 4.613: 99.4779% ( 1) 00:19:47.779 4.667 - 4.693: 99.4877% ( 2) 00:19:47.779 4.720 - 4.747: 99.4926% ( 1) 00:19:47.779 4.747 - 4.773: 99.4976% ( 1) 00:19:47.779 4.827 - 4.853: 99.5025% ( 1) 00:19:47.779 4.853 - 4.880: 99.5074% ( 1) 00:19:47.779 4.880 - 4.907: 99.5173% ( 2) 00:19:47.779 5.040 - 5.067: 99.5222% ( 1) 00:19:47.779 5.067 - 5.093: 99.5271% ( 1) 00:19:47.779 5.173 - 5.200: 99.5320% ( 1) 00:19:47.779 5.227 - 5.253: 99.5370% ( 1) 00:19:47.779 5.280 - 5.307: 99.5419% ( 1) 00:19:47.779 5.413 - 5.440: 99.5468% ( 1) 00:19:47.779 5.813 - 5.840: 99.5517% ( 1) 00:19:47.779 6.293 - 6.320: 99.5567% ( 1) 00:19:47.779 6.587 - 6.613: 99.5616% ( 1) 00:19:47.779 11.520 - 11.573: 99.5665% ( 1) 00:19:47.779 12.373 - 12.427: 99.5714% ( 1) 00:19:47.779 114.347 - 115.200: 99.5764% ( 1) 00:19:47.779 3986.773 - 4014.080: 100.0000% ( 86) 00:19:47.779 00:19:47.779 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:47.779 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:47.779 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:47.779 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:47.779 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:48.041 [ 00:19:48.041 { 00:19:48.041 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:19:48.041 "subtype": "Discovery", 00:19:48.041 "listen_addresses": [], 00:19:48.041 "allow_any_host": true, 00:19:48.041 "hosts": [] 00:19:48.041 }, 00:19:48.041 { 00:19:48.041 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:48.041 "subtype": "NVMe", 00:19:48.041 "listen_addresses": [ 00:19:48.041 { 00:19:48.041 "trtype": "VFIOUSER", 00:19:48.041 "adrfam": "IPv4", 00:19:48.041 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:48.041 "trsvcid": "0" 00:19:48.041 } 00:19:48.041 ], 00:19:48.041 "allow_any_host": true, 00:19:48.041 "hosts": [], 00:19:48.041 "serial_number": "SPDK1", 00:19:48.041 "model_number": "SPDK bdev Controller", 00:19:48.041 "max_namespaces": 32, 00:19:48.041 "min_cntlid": 1, 00:19:48.041 "max_cntlid": 65519, 00:19:48.041 "namespaces": [ 00:19:48.041 { 00:19:48.041 "nsid": 1, 00:19:48.041 "bdev_name": "Malloc1", 00:19:48.041 "name": "Malloc1", 00:19:48.041 "nguid": "CD8BA188EE6E472C94824EF81A4A118B", 00:19:48.041 "uuid": "cd8ba188-ee6e-472c-9482-4ef81a4a118b" 00:19:48.041 } 00:19:48.041 ] 00:19:48.041 }, 00:19:48.041 { 00:19:48.041 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:48.041 "subtype": "NVMe", 00:19:48.041 "listen_addresses": [ 00:19:48.041 { 00:19:48.041 "trtype": "VFIOUSER", 00:19:48.041 "adrfam": "IPv4", 00:19:48.041 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:48.041 "trsvcid": "0" 00:19:48.041 } 00:19:48.041 ], 00:19:48.041 "allow_any_host": true, 00:19:48.041 "hosts": [], 00:19:48.041 "serial_number": "SPDK2", 00:19:48.041 "model_number": "SPDK bdev Controller", 00:19:48.041 "max_namespaces": 32, 00:19:48.041 "min_cntlid": 1, 00:19:48.041 "max_cntlid": 65519, 00:19:48.041 "namespaces": [ 00:19:48.041 { 00:19:48.041 "nsid": 1, 00:19:48.041 "bdev_name": "Malloc2", 00:19:48.041 "name": "Malloc2", 00:19:48.041 "nguid": "FD1C1DAB166E4AD8833B646A0DDA7DB8", 00:19:48.041 "uuid": "fd1c1dab-166e-4ad8-833b-646a0dda7db8" 00:19:48.041 } 00:19:48.041 ] 00:19:48.041 } 00:19:48.041 ] 00:19:48.042 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:48.042 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:48.042 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2669380 00:19:48.042 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:48.042 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:19:48.042 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:48.042 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:48.042 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:19:48.042 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:48.042 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:48.042 [2024-11-20 06:31:07.935127] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:48.303 Malloc3 00:19:48.303 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:48.303 [2024-11-20 06:31:08.137523] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:48.303 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:48.303 Asynchronous Event Request test 00:19:48.303 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:48.303 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:48.303 Registering asynchronous event callbacks... 00:19:48.303 Starting namespace attribute notice tests for all controllers... 00:19:48.303 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:48.303 aer_cb - Changed Namespace 00:19:48.303 Cleaning up... 00:19:48.566 [ 00:19:48.566 { 00:19:48.566 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:48.566 "subtype": "Discovery", 00:19:48.566 "listen_addresses": [], 00:19:48.566 "allow_any_host": true, 00:19:48.566 "hosts": [] 00:19:48.566 }, 00:19:48.566 { 00:19:48.566 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:48.566 "subtype": "NVMe", 00:19:48.566 "listen_addresses": [ 00:19:48.566 { 00:19:48.566 "trtype": "VFIOUSER", 00:19:48.566 "adrfam": "IPv4", 00:19:48.566 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:48.566 "trsvcid": "0" 00:19:48.566 } 00:19:48.566 ], 00:19:48.566 "allow_any_host": true, 00:19:48.566 "hosts": [], 00:19:48.566 "serial_number": "SPDK1", 00:19:48.566 "model_number": "SPDK bdev Controller", 00:19:48.566 "max_namespaces": 32, 00:19:48.566 "min_cntlid": 1, 00:19:48.566 "max_cntlid": 65519, 00:19:48.566 "namespaces": [ 00:19:48.566 { 00:19:48.566 "nsid": 1, 00:19:48.566 "bdev_name": "Malloc1", 00:19:48.566 "name": "Malloc1", 00:19:48.566 "nguid": "CD8BA188EE6E472C94824EF81A4A118B", 00:19:48.566 "uuid": "cd8ba188-ee6e-472c-9482-4ef81a4a118b" 00:19:48.566 }, 00:19:48.566 { 00:19:48.566 "nsid": 2, 00:19:48.566 "bdev_name": "Malloc3", 00:19:48.566 "name": "Malloc3", 00:19:48.566 "nguid": "11FC31FABCF9430B964426AC19763F11", 00:19:48.566 "uuid": "11fc31fa-bcf9-430b-9644-26ac19763f11" 00:19:48.566 } 00:19:48.566 ] 00:19:48.566 }, 00:19:48.566 { 00:19:48.566 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:48.566 "subtype": "NVMe", 00:19:48.566 "listen_addresses": [ 00:19:48.566 { 00:19:48.566 "trtype": "VFIOUSER", 00:19:48.566 "adrfam": "IPv4", 00:19:48.566 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:48.566 "trsvcid": "0" 00:19:48.566 } 00:19:48.566 ], 00:19:48.566 "allow_any_host": true, 00:19:48.566 "hosts": [], 00:19:48.566 "serial_number": "SPDK2", 00:19:48.566 "model_number": "SPDK bdev 
Controller", 00:19:48.566 "max_namespaces": 32, 00:19:48.566 "min_cntlid": 1, 00:19:48.566 "max_cntlid": 65519, 00:19:48.566 "namespaces": [ 00:19:48.566 { 00:19:48.566 "nsid": 1, 00:19:48.566 "bdev_name": "Malloc2", 00:19:48.566 "name": "Malloc2", 00:19:48.566 "nguid": "FD1C1DAB166E4AD8833B646A0DDA7DB8", 00:19:48.566 "uuid": "fd1c1dab-166e-4ad8-833b-646a0dda7db8" 00:19:48.566 } 00:19:48.566 ] 00:19:48.566 } 00:19:48.566 ] 00:19:48.566 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2669380 00:19:48.566 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:48.566 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:48.566 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:48.566 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:48.566 [2024-11-20 06:31:08.374457] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:19:48.566 [2024-11-20 06:31:08.374500] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669541 ] 00:19:48.566 [2024-11-20 06:31:08.415964] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:48.566 [2024-11-20 06:31:08.422948] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:48.566 [2024-11-20 06:31:08.422967] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f036c6fa000 00:19:48.566 [2024-11-20 06:31:08.423949] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:48.566 [2024-11-20 06:31:08.424959] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:48.566 [2024-11-20 06:31:08.425966] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:48.566 [2024-11-20 06:31:08.426978] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:48.566 [2024-11-20 06:31:08.427986] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:48.566 [2024-11-20 06:31:08.428994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:48.566 [2024-11-20 06:31:08.429996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:48.566 [2024-11-20 06:31:08.431000] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:19:48.566 [2024-11-20 06:31:08.432005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:48.566 [2024-11-20 06:31:08.432013] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f036c6ef000 00:19:48.566 [2024-11-20 06:31:08.432924] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:48.566 [2024-11-20 06:31:08.445025] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:48.566 [2024-11-20 06:31:08.445044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:19:48.566 [2024-11-20 06:31:08.450107] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:48.566 [2024-11-20 06:31:08.450139] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:48.566 [2024-11-20 06:31:08.450198] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:19:48.566 [2024-11-20 06:31:08.450209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:19:48.566 [2024-11-20 06:31:08.450212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:19:48.566 [2024-11-20 06:31:08.451115] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:48.566 [2024-11-20 06:31:08.451122] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:19:48.566 [2024-11-20 06:31:08.451127] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:19:48.566 [2024-11-20 06:31:08.452124] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:48.566 [2024-11-20 06:31:08.452131] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:19:48.566 [2024-11-20 06:31:08.452136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:48.566 [2024-11-20 06:31:08.453131] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:48.566 [2024-11-20 06:31:08.453140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:48.566 [2024-11-20 06:31:08.454135] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:48.566 [2024-11-20 06:31:08.454141] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:19:48.566 [2024-11-20 06:31:08.454145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:48.566 [2024-11-20 06:31:08.454150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:48.566 [2024-11-20 06:31:08.454256] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:19:48.566 [2024-11-20 06:31:08.454259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:48.566 [2024-11-20 06:31:08.454263] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:48.566 [2024-11-20 06:31:08.455138] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:48.566 [2024-11-20 06:31:08.456142] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:48.566 [2024-11-20 06:31:08.457153] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:48.566 [2024-11-20 06:31:08.458154] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:48.566 [2024-11-20 06:31:08.458186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:48.566 [2024-11-20 06:31:08.459165] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:48.566 [2024-11-20 06:31:08.459172] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:48.566 [2024-11-20 06:31:08.459176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:48.566 [2024-11-20 06:31:08.459191] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:19:48.566 [2024-11-20 06:31:08.459199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:48.567 [2024-11-20 06:31:08.459208] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:48.567 [2024-11-20 06:31:08.459212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:48.567 [2024-11-20 06:31:08.459215] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:48.567 [2024-11-20 06:31:08.459225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:48.567 [2024-11-20 06:31:08.466750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:48.567 
[2024-11-20 06:31:08.466759] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:19:48.567 [2024-11-20 06:31:08.466763] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:19:48.567 [2024-11-20 06:31:08.466768] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:19:48.567 [2024-11-20 06:31:08.466771] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:48.567 [2024-11-20 06:31:08.466776] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:19:48.567 [2024-11-20 06:31:08.466779] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:19:48.567 [2024-11-20 06:31:08.466783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:19:48.567 [2024-11-20 06:31:08.466789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:48.567 [2024-11-20 06:31:08.466797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:48.567 [2024-11-20 06:31:08.474749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:48.567 [2024-11-20 06:31:08.474758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.567 [2024-11-20 06:31:08.474765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.567 [2024-11-20 06:31:08.474771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.567 [2024-11-20 06:31:08.474777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.567 [2024-11-20 06:31:08.474780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:48.567 [2024-11-20 06:31:08.474785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:48.567 [2024-11-20 06:31:08.474792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:48.829 [2024-11-20 06:31:08.482750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:48.829 [2024-11-20 06:31:08.482759] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:19:48.829 [2024-11-20 06:31:08.482762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:19:48.829 [2024-11-20 06:31:08.482768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:19:48.829 [2024-11-20 06:31:08.482772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:48.829 [2024-11-20 06:31:08.482778] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:48.829 [2024-11-20 06:31:08.490749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:48.829 [2024-11-20 06:31:08.490796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:19:48.829 [2024-11-20 06:31:08.490802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:48.829 [2024-11-20 06:31:08.490810] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:48.829 [2024-11-20 06:31:08.490813] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:48.829 [2024-11-20 06:31:08.490815] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:48.829 [2024-11-20 06:31:08.490820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:48.829 [2024-11-20 06:31:08.498750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:48.829 [2024-11-20 06:31:08.498758] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:19:48.829 [2024-11-20 06:31:08.498766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:19:48.829 [2024-11-20 06:31:08.498772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:48.829 [2024-11-20 06:31:08.498777] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:48.829 [2024-11-20 06:31:08.498780] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:48.830 [2024-11-20 06:31:08.498782] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:48.830 [2024-11-20 06:31:08.498786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:48.830 [2024-11-20 06:31:08.506748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:48.830 [2024-11-20 06:31:08.506759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:48.830 [2024-11-20 06:31:08.506765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:19:48.830 [2024-11-20 06:31:08.506770] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:48.830 [2024-11-20 06:31:08.506773] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:48.830 [2024-11-20 06:31:08.506775] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:48.830 [2024-11-20 06:31:08.506780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:48.830 [2024-11-20 06:31:08.514749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:48.830 [2024-11-20 06:31:08.514756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:48.830 [2024-11-20 06:31:08.514761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:48.830 [2024-11-20 06:31:08.514766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:19:48.830 [2024-11-20 06:31:08.514771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:48.830 [2024-11-20 06:31:08.514774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:48.830 [2024-11-20 06:31:08.514778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:19:48.830 [2024-11-20 06:31:08.514783] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:48.830 [2024-11-20 06:31:08.514786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:19:48.830 [2024-11-20 06:31:08.514790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:19:48.830 [2024-11-20 06:31:08.514803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:48.830 [2024-11-20 06:31:08.522748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:48.830 [2024-11-20 06:31:08.522758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:48.830 [2024-11-20 06:31:08.530751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:48.830 [2024-11-20 06:31:08.530760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:48.830 [2024-11-20 06:31:08.538750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:19:48.830 [2024-11-20 06:31:08.538760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:48.830 [2024-11-20 06:31:08.546748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:48.830 [2024-11-20 06:31:08.546760] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:48.830 [2024-11-20 06:31:08.546763] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:48.830 [2024-11-20 06:31:08.546766] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:48.830 [2024-11-20 06:31:08.546768] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:48.830 [2024-11-20 06:31:08.546770] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:48.830 [2024-11-20 06:31:08.546775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:48.830 [2024-11-20 06:31:08.546781] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:48.830 [2024-11-20 06:31:08.546784] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:48.830 [2024-11-20 06:31:08.546786] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:48.830 [2024-11-20 06:31:08.546790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:48.830 [2024-11-20 06:31:08.546796] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:48.830 [2024-11-20 06:31:08.546798] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:48.830 [2024-11-20 06:31:08.546801] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:48.830 [2024-11-20 06:31:08.546805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:48.830 [2024-11-20 06:31:08.546811] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:48.830 [2024-11-20 06:31:08.546814] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:48.830 [2024-11-20 06:31:08.546816] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:48.830 [2024-11-20 06:31:08.546820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:48.830 [2024-11-20 06:31:08.554748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:48.830 [2024-11-20 06:31:08.554759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:48.830 [2024-11-20 06:31:08.554766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:48.830 
[2024-11-20 06:31:08.554771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:48.830 ===================================================== 00:19:48.830 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:48.830 ===================================================== 00:19:48.830 Controller Capabilities/Features 00:19:48.830 ================================ 00:19:48.830 Vendor ID: 4e58 00:19:48.830 Subsystem Vendor ID: 4e58 00:19:48.830 Serial Number: SPDK2 00:19:48.830 Model Number: SPDK bdev Controller 00:19:48.830 Firmware Version: 25.01 00:19:48.830 Recommended Arb Burst: 6 00:19:48.830 IEEE OUI Identifier: 8d 6b 50 00:19:48.830 Multi-path I/O 00:19:48.830 May have multiple subsystem ports: Yes 00:19:48.830 May have multiple controllers: Yes 00:19:48.830 Associated with SR-IOV VF: No 00:19:48.830 Max Data Transfer Size: 131072 00:19:48.830 Max Number of Namespaces: 32 00:19:48.830 Max Number of I/O Queues: 127 00:19:48.830 NVMe Specification Version (VS): 1.3 00:19:48.830 NVMe Specification Version (Identify): 1.3 00:19:48.830 Maximum Queue Entries: 256 00:19:48.830 Contiguous Queues Required: Yes 00:19:48.830 Arbitration Mechanisms Supported 00:19:48.830 Weighted Round Robin: Not Supported 00:19:48.830 Vendor Specific: Not Supported 00:19:48.830 Reset Timeout: 15000 ms 00:19:48.830 Doorbell Stride: 4 bytes 00:19:48.830 NVM Subsystem Reset: Not Supported 00:19:48.830 Command Sets Supported 00:19:48.830 NVM Command Set: Supported 00:19:48.830 Boot Partition: Not Supported 00:19:48.830 Memory Page Size Minimum: 4096 bytes 00:19:48.830 Memory Page Size Maximum: 4096 bytes 00:19:48.830 Persistent Memory Region: Not Supported 00:19:48.830 Optional Asynchronous Events Supported 00:19:48.830 Namespace Attribute Notices: Supported 00:19:48.830 Firmware Activation Notices: Not Supported 00:19:48.830 ANA Change Notices: Not Supported 00:19:48.830 PLE Aggregate Log Change Notices: Not Supported 00:19:48.830 LBA Status Info Alert Notices: Not Supported 00:19:48.830 EGE Aggregate Log Change Notices: Not Supported 00:19:48.830 Normal NVM Subsystem Shutdown event: Not Supported 00:19:48.830 Zone Descriptor Change Notices: Not Supported 00:19:48.830 Discovery Log Change Notices: Not Supported 00:19:48.830 Controller Attributes 00:19:48.830 128-bit Host Identifier: Supported 00:19:48.830 Non-Operational Permissive Mode: Not Supported 00:19:48.830 NVM Sets: Not Supported 00:19:48.830 Read Recovery Levels: Not Supported 00:19:48.830 Endurance Groups: Not Supported 00:19:48.830 Predictable Latency Mode: Not Supported 00:19:48.830 Traffic Based Keep ALive: Not Supported 00:19:48.830 Namespace Granularity: Not Supported 00:19:48.830 SQ Associations: Not Supported 00:19:48.830 UUID List: Not Supported 00:19:48.830 Multi-Domain Subsystem: Not Supported 00:19:48.830 Fixed Capacity Management: Not Supported 00:19:48.830 Variable Capacity Management: Not Supported 00:19:48.830 Delete Endurance Group: Not Supported 00:19:48.830 Delete NVM Set: Not Supported 00:19:48.830 Extended LBA Formats Supported: Not Supported 00:19:48.830 Flexible Data Placement Supported: Not Supported 00:19:48.830 00:19:48.830 Controller Memory Buffer Support 00:19:48.830 ================================ 00:19:48.830 Supported: No 00:19:48.830 00:19:48.830 Persistent Memory Region Support 00:19:48.830 ================================ 00:19:48.830 Supported: No 00:19:48.830 00:19:48.830 Admin Command Set Attributes 
00:19:48.830 ============================ 00:19:48.830 Security Send/Receive: Not Supported 00:19:48.830 Format NVM: Not Supported 00:19:48.831 Firmware Activate/Download: Not Supported 00:19:48.831 Namespace Management: Not Supported 00:19:48.831 Device Self-Test: Not Supported 00:19:48.831 Directives: Not Supported 00:19:48.831 NVMe-MI: Not Supported 00:19:48.831 Virtualization Management: Not Supported 00:19:48.831 Doorbell Buffer Config: Not Supported 00:19:48.831 Get LBA Status Capability: Not Supported 00:19:48.831 Command & Feature Lockdown Capability: Not Supported 00:19:48.831 Abort Command Limit: 4 00:19:48.831 Async Event Request Limit: 4 00:19:48.831 Number of Firmware Slots: N/A 00:19:48.831 Firmware Slot 1 Read-Only: N/A 00:19:48.831 Firmware Activation Without Reset: N/A 00:19:48.831 Multiple Update Detection Support: N/A 00:19:48.831 Firmware Update Granularity: No Information Provided 00:19:48.831 Per-Namespace SMART Log: No 00:19:48.831 Asymmetric Namespace Access Log Page: Not Supported 00:19:48.831 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:48.831 Command Effects Log Page: Supported 00:19:48.831 Get Log Page Extended Data: Supported 00:19:48.831 Telemetry Log Pages: Not Supported 00:19:48.831 Persistent Event Log Pages: Not Supported 00:19:48.831 Supported Log Pages Log Page: May Support 00:19:48.831 Commands Supported & Effects Log Page: Not Supported 00:19:48.831 Feature Identifiers & Effects Log Page:May Support 00:19:48.831 NVMe-MI Commands & Effects Log Page: May Support 00:19:48.831 Data Area 4 for Telemetry Log: Not Supported 00:19:48.831 Error Log Page Entries Supported: 128 00:19:48.831 Keep Alive: Supported 00:19:48.831 Keep Alive Granularity: 10000 ms 00:19:48.831 00:19:48.831 NVM Command Set Attributes 00:19:48.831 ========================== 00:19:48.831 Submission Queue Entry Size 00:19:48.831 Max: 64 00:19:48.831 Min: 64 00:19:48.831 Completion Queue Entry Size 00:19:48.831 Max: 16 00:19:48.831 Min: 16 00:19:48.831 Number of Namespaces: 32 00:19:48.831 Compare Command: Supported 00:19:48.831 Write Uncorrectable Command: Not Supported 00:19:48.831 Dataset Management Command: Supported 00:19:48.831 Write Zeroes Command: Supported 00:19:48.831 Set Features Save Field: Not Supported 00:19:48.831 Reservations: Not Supported 00:19:48.831 Timestamp: Not Supported 00:19:48.831 Copy: Supported 00:19:48.831 Volatile Write Cache: Present 00:19:48.831 Atomic Write Unit (Normal): 1 00:19:48.831 Atomic Write Unit (PFail): 1 00:19:48.831 Atomic Compare & Write Unit: 1 00:19:48.831 Fused Compare & Write: Supported 00:19:48.831 Scatter-Gather List 00:19:48.831 SGL Command Set: Supported (Dword aligned) 00:19:48.831 SGL Keyed: Not Supported 00:19:48.831 SGL Bit Bucket Descriptor: Not Supported 00:19:48.831 SGL Metadata Pointer: Not Supported 00:19:48.831 Oversized SGL: Not Supported 00:19:48.831 SGL Metadata Address: Not Supported 00:19:48.831 SGL Offset: Not Supported 00:19:48.831 Transport SGL Data Block: Not Supported 00:19:48.831 Replay Protected Memory Block: Not Supported 00:19:48.831 00:19:48.831 Firmware Slot Information 00:19:48.831 ========================= 00:19:48.831 Active slot: 1 00:19:48.831 Slot 1 Firmware Revision: 25.01 00:19:48.831 00:19:48.831 00:19:48.831 Commands Supported and Effects 00:19:48.831 ============================== 00:19:48.831 Admin Commands 00:19:48.831 -------------- 00:19:48.831 Get Log Page (02h): Supported 00:19:48.831 Identify (06h): Supported 00:19:48.831 Abort (08h): Supported 00:19:48.831 Set Features (09h): Supported 
00:19:48.831 Get Features (0Ah): Supported 00:19:48.831 Asynchronous Event Request (0Ch): Supported 00:19:48.831 Keep Alive (18h): Supported 00:19:48.831 I/O Commands 00:19:48.831 ------------ 00:19:48.831 Flush (00h): Supported LBA-Change 00:19:48.831 Write (01h): Supported LBA-Change 00:19:48.831 Read (02h): Supported 00:19:48.831 Compare (05h): Supported 00:19:48.831 Write Zeroes (08h): Supported LBA-Change 00:19:48.831 Dataset Management (09h): Supported LBA-Change 00:19:48.831 Copy (19h): Supported LBA-Change 00:19:48.831 00:19:48.831 Error Log 00:19:48.831 ========= 00:19:48.831 00:19:48.831 Arbitration 00:19:48.831 =========== 00:19:48.831 Arbitration Burst: 1 00:19:48.831 00:19:48.831 Power Management 00:19:48.831 ================ 00:19:48.831 Number of Power States: 1 00:19:48.831 Current Power State: Power State #0 00:19:48.831 Power State #0: 00:19:48.831 Max Power: 0.00 W 00:19:48.831 Non-Operational State: Operational 00:19:48.831 Entry Latency: Not Reported 00:19:48.831 Exit Latency: Not Reported 00:19:48.831 Relative Read Throughput: 0 00:19:48.831 Relative Read Latency: 0 00:19:48.831 Relative Write Throughput: 0 00:19:48.831 Relative Write Latency: 0 00:19:48.831 Idle Power: Not Reported 00:19:48.831 Active Power: Not Reported 00:19:48.831 Non-Operational Permissive Mode: Not Supported 00:19:48.831 00:19:48.831 Health Information 00:19:48.831 ================== 00:19:48.831 Critical Warnings: 00:19:48.831 Available Spare Space: OK 00:19:48.831 Temperature: OK 00:19:48.831 Device Reliability: OK 00:19:48.831 Read Only: No 00:19:48.831 Volatile Memory Backup: OK 00:19:48.831 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:48.831 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:48.831 Available Spare: 0% 00:19:48.831 Available Sp[2024-11-20 06:31:08.554844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:48.831 [2024-11-20 06:31:08.562750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:48.831 [2024-11-20 06:31:08.562774] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:19:48.831 [2024-11-20 06:31:08.562780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.831 [2024-11-20 06:31:08.562785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.831 [2024-11-20 06:31:08.562789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.831 [2024-11-20 06:31:08.562794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.831 [2024-11-20 06:31:08.562824] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:48.831 [2024-11-20 06:31:08.562832] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:48.831 [2024-11-20 06:31:08.563833] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:48.831 [2024-11-20 06:31:08.563869] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:19:48.831 [2024-11-20 06:31:08.563874] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:19:48.831 [2024-11-20 06:31:08.564836] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:48.831 [2024-11-20 06:31:08.564845] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:19:48.831 [2024-11-20 06:31:08.564888] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:48.831 [2024-11-20 06:31:08.565858] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:48.831 are Threshold: 0% 00:19:48.831 Life Percentage Used: 0% 00:19:48.831 Data Units Read: 0 00:19:48.831 Data Units Written: 0 00:19:48.831 Host Read Commands: 0 00:19:48.831 Host Write Commands: 0 00:19:48.831 Controller Busy Time: 0 minutes 00:19:48.831 Power Cycles: 0 00:19:48.831 Power On Hours: 0 hours 00:19:48.831 Unsafe Shutdowns: 0 00:19:48.831 Unrecoverable Media Errors: 0 00:19:48.831 Lifetime Error Log Entries: 0 00:19:48.831 Warning Temperature Time: 0 minutes 00:19:48.831 Critical Temperature Time: 0 minutes 00:19:48.831 00:19:48.831 Number of Queues 00:19:48.831 ================ 00:19:48.831 Number of I/O Submission Queues: 127 00:19:48.831 Number of I/O Completion Queues: 127 00:19:48.831 00:19:48.831 Active Namespaces 00:19:48.831 ================= 00:19:48.831 Namespace ID:1 00:19:48.831 Error Recovery Timeout: Unlimited 00:19:48.831 Command Set Identifier: NVM (00h) 00:19:48.831 Deallocate: Supported 00:19:48.831 Deallocated/Unwritten Error: Not Supported 00:19:48.831 Deallocated Read Value: Unknown 00:19:48.831 Deallocate in Write Zeroes: Not Supported 00:19:48.831 Deallocated Guard Field: 0xFFFF 00:19:48.831 Flush: Supported 00:19:48.831 Reservation: Supported 00:19:48.831 Namespace Sharing Capabilities: Multiple Controllers 00:19:48.831 Size (in LBAs): 131072 (0GiB) 00:19:48.831 Capacity (in LBAs): 131072 (0GiB) 00:19:48.831 Utilization (in LBAs): 131072 (0GiB) 00:19:48.831 NGUID: FD1C1DAB166E4AD8833B646A0DDA7DB8 00:19:48.831 UUID: fd1c1dab-166e-4ad8-833b-646a0dda7db8 00:19:48.831 Thin Provisioning: Not Supported 00:19:48.832 Per-NS Atomic Units: Yes 00:19:48.832 Atomic Boundary Size (Normal): 0 00:19:48.832 Atomic Boundary Size (PFail): 0 00:19:48.832 Atomic Boundary Offset: 0 00:19:48.832 Maximum Single Source Range Length: 65535 00:19:48.832 Maximum Copy Length: 65535 00:19:48.832 Maximum Source Range Count: 1 00:19:48.832 NGUID/EUI64 Never Reused: No 00:19:48.832 Namespace Write Protected: No 00:19:48.832 Number of LBA Formats: 1 00:19:48.832 Current LBA Format: LBA Format #00 00:19:48.832 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:48.832 00:19:48.832 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:49.092 [2024-11-20 06:31:08.752814] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:54.389 Initializing NVMe Controllers 00:19:54.389 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:54.389 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:54.389 Initialization complete. Launching workers. 00:19:54.389 ======================================================== 00:19:54.389 Latency(us) 00:19:54.389 Device Information : IOPS MiB/s Average min max 00:19:54.389 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39984.40 156.19 3203.63 842.95 7780.55 00:19:54.389 ======================================================== 00:19:54.389 Total : 39984.40 156.19 3203.63 842.95 7780.55 00:19:54.389 00:19:54.389 [2024-11-20 06:31:13.860946] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:54.389 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:54.389 [2024-11-20 06:31:14.052575] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:59.685 Initializing NVMe Controllers 00:19:59.685 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:59.685 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:59.685 Initialization complete. Launching workers. 00:19:59.685 ======================================================== 00:19:59.685 Latency(us) 00:19:59.685 Device Information : IOPS MiB/s Average min max 00:19:59.685 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39951.53 156.06 3203.75 842.73 10775.67 00:19:59.685 ======================================================== 00:19:59.685 Total : 39951.53 156.06 3203.75 842.73 10775.67 00:19:59.685 00:19:59.685 [2024-11-20 06:31:19.075169] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:59.685 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:59.685 [2024-11-20 06:31:19.274141] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:04.975 [2024-11-20 06:31:24.409837] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:04.975 Initializing NVMe Controllers 00:20:04.975 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:04.975 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:04.975 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:04.975 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:04.975 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:04.975 Initialization complete. Launching workers. 
00:20:04.975 Starting thread on core 2 00:20:04.975 Starting thread on core 3 00:20:04.975 Starting thread on core 1 00:20:04.975 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:04.975 [2024-11-20 06:31:24.661197] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:08.417 [2024-11-20 06:31:27.723663] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:08.417 Initializing NVMe Controllers 00:20:08.417 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:08.417 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:08.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:08.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:08.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:08.417 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:08.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:08.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:08.417 Initialization complete. Launching workers. 00:20:08.417 Starting thread on core 1 with urgent priority queue 00:20:08.417 Starting thread on core 2 with urgent priority queue 00:20:08.417 Starting thread on core 3 with urgent priority queue 00:20:08.417 Starting thread on core 0 with urgent priority queue 00:20:08.417 SPDK bdev Controller (SPDK2 ) core 0: 13290.67 IO/s 7.52 secs/100000 ios 00:20:08.417 SPDK bdev Controller (SPDK2 ) core 1: 7823.33 IO/s 12.78 secs/100000 ios 00:20:08.417 SPDK bdev Controller (SPDK2 ) core 2: 13634.67 IO/s 7.33 secs/100000 ios 00:20:08.417 SPDK bdev Controller (SPDK2 ) core 3: 8916.00 IO/s 11.22 secs/100000 ios 00:20:08.417 ======================================================== 00:20:08.417 00:20:08.417 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:08.417 [2024-11-20 06:31:27.964111] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:08.417 Initializing NVMe Controllers 00:20:08.417 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:08.417 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:08.417 Namespace ID: 1 size: 0GB 00:20:08.417 Initialization complete. 00:20:08.417 INFO: using host memory buffer for IO 00:20:08.417 Hello world! 
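The hello_world run above opens the vfio-user controller, allocates a host memory buffer, writes a "Hello world!" string to namespace 1 and reads it back. A minimal sketch of an equivalent invocation, assuming the target from this run is still listening on the same socket directory (adjust traddr/subnqn to match the listener created with nvmf_subsystem_add_listener):

  # Transport ID string as used throughout this run.
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  ./build/examples/hello_world -g -d 256 -r "$TRID"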
00:20:08.417 [2024-11-20 06:31:27.974176] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:08.417 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:08.417 [2024-11-20 06:31:28.215403] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:09.800 Initializing NVMe Controllers 00:20:09.800 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:09.800 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:09.800 Initialization complete. Launching workers. 00:20:09.800 submit (in ns) avg, min, max = 5142.0, 2835.0, 3999993.3 00:20:09.800 complete (in ns) avg, min, max = 16779.8, 1638.3, 3999220.0 00:20:09.800 00:20:09.800 Submit histogram 00:20:09.800 ================ 00:20:09.800 Range in us Cumulative Count 00:20:09.800 2.827 - 2.840: 0.0588% ( 12) 00:20:09.800 2.840 - 2.853: 0.8814% ( 168) 00:20:09.800 2.853 - 2.867: 2.9183% ( 416) 00:20:09.800 2.867 - 2.880: 6.3409% ( 699) 00:20:09.800 2.880 - 2.893: 10.9974% ( 951) 00:20:09.800 2.893 - 2.907: 16.4374% ( 1111) 00:20:09.800 2.907 - 2.920: 21.5003% ( 1034) 00:20:09.800 2.920 - 2.933: 26.5926% ( 1040) 00:20:09.800 2.933 - 2.947: 32.1304% ( 1131) 00:20:09.801 2.947 - 2.960: 37.8446% ( 1167) 00:20:09.801 2.960 - 2.973: 43.5832% ( 1172) 00:20:09.801 2.973 - 2.987: 48.6657% ( 1038) 00:20:09.801 2.987 - 3.000: 56.1230% ( 1523) 00:20:09.801 3.000 - 3.013: 65.3528% ( 1885) 00:20:09.801 3.013 - 3.027: 75.2387% ( 2019) 00:20:09.801 3.027 - 3.040: 82.9408% ( 1573) 00:20:09.801 3.040 - 3.053: 89.1103% ( 1260) 00:20:09.801 3.053 - 3.067: 93.5367% ( 904) 00:20:09.801 3.067 - 3.080: 96.5186% ( 609) 00:20:09.801 3.080 - 3.093: 98.1394% ( 331) 00:20:09.801 3.093 - 3.107: 98.9130% ( 158) 00:20:09.801 3.107 - 3.120: 99.2606% ( 71) 00:20:09.801 3.120 - 3.133: 99.4026% ( 29) 00:20:09.801 3.133 - 3.147: 99.4957% ( 19) 00:20:09.801 3.147 - 3.160: 99.5397% ( 9) 00:20:09.801 3.160 - 3.173: 99.5446% ( 1) 00:20:09.801 3.173 - 3.187: 99.5544% ( 2) 00:20:09.801 3.187 - 3.200: 99.5642% ( 2) 00:20:09.801 3.200 - 3.213: 99.5691% ( 1) 00:20:09.801 3.213 - 3.227: 99.5740% ( 1) 00:20:09.801 3.347 - 3.360: 99.5789% ( 1) 00:20:09.801 3.413 - 3.440: 99.5838% ( 1) 00:20:09.801 3.493 - 3.520: 99.5887% ( 1) 00:20:09.801 3.600 - 3.627: 99.5936% ( 1) 00:20:09.801 3.627 - 3.653: 99.5985% ( 1) 00:20:09.801 3.680 - 3.707: 99.6034% ( 1) 00:20:09.801 3.760 - 3.787: 99.6132% ( 2) 00:20:09.801 4.000 - 4.027: 99.6181% ( 1) 00:20:09.801 4.107 - 4.133: 99.6230% ( 1) 00:20:09.801 4.240 - 4.267: 99.6279% ( 1) 00:20:09.801 4.400 - 4.427: 99.6328% ( 1) 00:20:09.801 4.453 - 4.480: 99.6377% ( 1) 00:20:09.801 4.693 - 4.720: 99.6426% ( 1) 00:20:09.801 4.720 - 4.747: 99.6524% ( 2) 00:20:09.801 4.747 - 4.773: 99.6572% ( 1) 00:20:09.801 4.853 - 4.880: 99.6621% ( 1) 00:20:09.801 4.880 - 4.907: 99.6719% ( 2) 00:20:09.801 4.907 - 4.933: 99.6768% ( 1) 00:20:09.801 4.960 - 4.987: 99.6866% ( 2) 00:20:09.801 4.987 - 5.013: 99.6915% ( 1) 00:20:09.801 5.013 - 5.040: 99.7013% ( 2) 00:20:09.801 5.067 - 5.093: 99.7111% ( 2) 00:20:09.801 5.093 - 5.120: 99.7209% ( 2) 00:20:09.801 5.173 - 5.200: 99.7258% ( 1) 00:20:09.801 5.200 - 5.227: 99.7307% ( 1) 00:20:09.801 5.573 - 5.600: 99.7356% ( 1) 00:20:09.801 5.867 - 5.893: 99.7405% ( 1) 00:20:09.801 6.080 - 6.107: 
99.7454% ( 1) 00:20:09.801 6.133 - 6.160: 99.7552% ( 2) 00:20:09.801 6.187 - 6.213: 99.7650% ( 2) 00:20:09.801 6.293 - 6.320: 99.7699% ( 1) 00:20:09.801 6.320 - 6.347: 99.7797% ( 2) 00:20:09.801 6.347 - 6.373: 99.7846% ( 1) 00:20:09.801 6.507 - 6.533: 99.7895% ( 1) 00:20:09.801 6.533 - 6.560: 99.7943% ( 1) 00:20:09.801 6.560 - 6.587: 99.7992% ( 1) 00:20:09.801 6.667 - 6.693: 99.8041% ( 1) 00:20:09.801 6.747 - 6.773: 99.8090% ( 1) 00:20:09.801 6.827 - 6.880: 99.8139% ( 1) 00:20:09.801 6.933 - 6.987: 99.8384% ( 5) 00:20:09.801 6.987 - 7.040: 99.8433% ( 1) 00:20:09.801 7.040 - 7.093: 99.8531% ( 2) 00:20:09.801 7.093 - 7.147: 99.8580% ( 1) 00:20:09.801 7.147 - 7.200: 99.8629% ( 1) 00:20:09.801 7.253 - 7.307: 99.8727% ( 2) 00:20:09.801 7.360 - 7.413: 99.8825% ( 2) 00:20:09.801 7.413 - 7.467: 99.8874% ( 1) 00:20:09.801 7.467 - 7.520: 99.8972% ( 2) 00:20:09.801 7.573 - 7.627: 99.9119% ( 3) 00:20:09.801 [2024-11-20 06:31:29.310278] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:09.801 7.627 - 7.680: 99.9168% ( 1) 00:20:09.801 7.893 - 7.947: 99.9266% ( 2) 00:20:09.801 7.947 - 8.000: 99.9314% ( 1) 00:20:09.801 8.213 - 8.267: 99.9363% ( 1) 00:20:09.801 8.853 - 8.907: 99.9412% ( 1) 00:20:09.801 13.387 - 13.440: 99.9461% ( 1) 00:20:09.801 3986.773 - 4014.080: 100.0000% ( 11) 00:20:09.801 00:20:09.801 Complete histogram 00:20:09.801 ================== 00:20:09.801 Range in us Cumulative Count 00:20:09.801 1.633 - 1.640: 0.0098% ( 2) 00:20:09.801 1.640 - 1.647: 0.6708% ( 135) 00:20:09.801 1.647 - 1.653: 0.8226% ( 31) 00:20:09.801 1.653 - 1.660: 0.9205% ( 20) 00:20:09.801 1.660 - 1.667: 1.0380% ( 24) 00:20:09.801 1.667 - 1.673: 1.1311% ( 19) 00:20:09.801 1.673 - 1.680: 1.1654% ( 7) 00:20:09.801 1.680 - 1.687: 1.2878% ( 25) 00:20:09.801 1.687 - 1.693: 43.2258% ( 8565) 00:20:09.801 1.693 - 1.700: 51.7896% ( 1749) 00:20:09.801 1.700 - 1.707: 57.5919% ( 1185) 00:20:09.801 1.707 - 1.720: 75.2632% ( 3609) 00:20:09.801 1.720 - 1.733: 82.6225% ( 1503) 00:20:09.801 1.733 - 1.747: 83.9446% ( 270) 00:20:09.801 1.747 - 1.760: 87.6610% ( 759) 00:20:09.801 1.760 - 1.773: 93.3457% ( 1161) 00:20:09.801 1.773 - 1.787: 96.9446% ( 735) 00:20:09.801 1.787 - 1.800: 98.7857% ( 376) 00:20:09.801 1.800 - 1.813: 99.2949% ( 104) 00:20:09.801 1.813 - 1.827: 99.4173% ( 25) 00:20:09.801 1.827 - 1.840: 99.4369% ( 4) 00:20:09.801 1.840 - 1.853: 99.4418% ( 1) 00:20:09.801 1.933 - 1.947: 99.4467% ( 1) 00:20:09.801 3.280 - 3.293: 99.4516% ( 1) 00:20:09.801 3.440 - 3.467: 99.4565% ( 1) 00:20:09.801 3.547 - 3.573: 99.4614% ( 1) 00:20:09.801 4.160 - 4.187: 99.4663% ( 1) 00:20:09.801 4.640 - 4.667: 99.4761% ( 2) 00:20:09.801 4.747 - 4.773: 99.4810% ( 1) 00:20:09.801 4.773 - 4.800: 99.4859% ( 1) 00:20:09.801 4.853 - 4.880: 99.4908% ( 1) 00:20:09.801 5.013 - 5.040: 99.5006% ( 2) 00:20:09.801 5.280 - 5.307: 99.5055% ( 1) 00:20:09.801 5.360 - 5.387: 99.5104% ( 1) 00:20:09.801 5.413 - 5.440: 99.5153% ( 1) 00:20:09.801 5.440 - 5.467: 99.5201% ( 1) 00:20:09.801 5.493 - 5.520: 99.5250% ( 1) 00:20:09.801 5.520 - 5.547: 99.5299% ( 1) 00:20:09.801 5.627 - 5.653: 99.5348% ( 1) 00:20:09.801 5.680 - 5.707: 99.5397% ( 1) 00:20:09.801 5.733 - 5.760: 99.5446% ( 1) 00:20:09.801 5.787 - 5.813: 99.5495% ( 1) 00:20:09.801 6.053 - 6.080: 99.5544% ( 1) 00:20:09.801 6.080 - 6.107: 99.5593% ( 1) 00:20:09.801 6.187 - 6.213: 99.5642% ( 1) 00:20:09.801 6.427 - 6.453: 99.5691% ( 1) 00:20:09.801 6.533 - 6.560: 99.5740% ( 1) 00:20:09.801 6.613 - 6.640: 99.5789% ( 1) 00:20:09.801 6.933 - 6.987: 99.5887% ( 2) 
00:20:09.801 7.040 - 7.093: 99.5936% ( 1) 00:20:09.801 7.200 - 7.253: 99.5985% ( 1) 00:20:09.801 7.253 - 7.307: 99.6034% ( 1) 00:20:09.801 7.573 - 7.627: 99.6083% ( 1) 00:20:09.801 9.227 - 9.280: 99.6132% ( 1) 00:20:09.801 10.827 - 10.880: 99.6181% ( 1) 00:20:09.801 11.627 - 11.680: 99.6230% ( 1) 00:20:09.801 3986.773 - 4014.080: 100.0000% ( 77) 00:20:09.801 00:20:09.801 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:09.801 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:09.801 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:09.801 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:09.801 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:09.801 [ 00:20:09.801 { 00:20:09.801 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:09.801 "subtype": "Discovery", 00:20:09.801 "listen_addresses": [], 00:20:09.801 "allow_any_host": true, 00:20:09.801 "hosts": [] 00:20:09.801 }, 00:20:09.801 { 00:20:09.801 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:09.801 "subtype": "NVMe", 00:20:09.801 "listen_addresses": [ 00:20:09.801 { 00:20:09.801 "trtype": "VFIOUSER", 00:20:09.801 "adrfam": "IPv4", 00:20:09.801 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:09.801 "trsvcid": "0" 00:20:09.801 } 00:20:09.801 ], 00:20:09.801 "allow_any_host": true, 00:20:09.801 "hosts": [], 00:20:09.801 "serial_number": "SPDK1", 00:20:09.801 "model_number": "SPDK bdev Controller", 00:20:09.801 "max_namespaces": 32, 00:20:09.801 "min_cntlid": 1, 00:20:09.801 "max_cntlid": 65519, 00:20:09.801 "namespaces": [ 00:20:09.801 { 00:20:09.801 "nsid": 1, 00:20:09.801 "bdev_name": "Malloc1", 00:20:09.801 "name": "Malloc1", 00:20:09.801 "nguid": "CD8BA188EE6E472C94824EF81A4A118B", 00:20:09.801 "uuid": "cd8ba188-ee6e-472c-9482-4ef81a4a118b" 00:20:09.801 }, 00:20:09.801 { 00:20:09.801 "nsid": 2, 00:20:09.801 "bdev_name": "Malloc3", 00:20:09.801 "name": "Malloc3", 00:20:09.801 "nguid": "11FC31FABCF9430B964426AC19763F11", 00:20:09.801 "uuid": "11fc31fa-bcf9-430b-9644-26ac19763f11" 00:20:09.801 } 00:20:09.801 ] 00:20:09.801 }, 00:20:09.801 { 00:20:09.802 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:09.802 "subtype": "NVMe", 00:20:09.802 "listen_addresses": [ 00:20:09.802 { 00:20:09.802 "trtype": "VFIOUSER", 00:20:09.802 "adrfam": "IPv4", 00:20:09.802 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:09.802 "trsvcid": "0" 00:20:09.802 } 00:20:09.802 ], 00:20:09.802 "allow_any_host": true, 00:20:09.802 "hosts": [], 00:20:09.802 "serial_number": "SPDK2", 00:20:09.802 "model_number": "SPDK bdev Controller", 00:20:09.802 "max_namespaces": 32, 00:20:09.802 "min_cntlid": 1, 00:20:09.802 "max_cntlid": 65519, 00:20:09.802 "namespaces": [ 00:20:09.802 { 00:20:09.802 "nsid": 1, 00:20:09.802 "bdev_name": "Malloc2", 00:20:09.802 "name": "Malloc2", 00:20:09.802 "nguid": "FD1C1DAB166E4AD8833B646A0DDA7DB8", 00:20:09.802 "uuid": "fd1c1dab-166e-4ad8-833b-646a0dda7db8" 00:20:09.802 } 00:20:09.802 ] 00:20:09.802 } 00:20:09.802 ] 00:20:09.802 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:09.802 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2673575 00:20:09.802 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:09.802 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:09.802 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:20:09.802 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:09.802 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:09.802 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:20:09.802 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:09.802 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:09.802 [2024-11-20 06:31:29.681094] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:09.802 Malloc4 00:20:10.063 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:10.063 [2024-11-20 06:31:29.883409] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:10.063 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:10.063 Asynchronous Event Request test 00:20:10.063 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:10.063 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:10.063 Registering asynchronous event callbacks... 00:20:10.063 Starting namespace attribute notice tests for all controllers... 00:20:10.063 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:10.063 aer_cb - Changed Namespace 00:20:10.063 Cleaning up... 
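The AER sequence above is driven from the RPC side: while the aer tool waits with an admin Asynchronous Event Request outstanding, the test hot-adds a second namespace to cnode2, and the target completes the AER with a namespace-attribute-changed notice (log page 4, aen_event_type 0x02), which aer_cb reports before cleanup. A sketch of the hot-add that fires the event, assuming the target from this run is up; both commands are the ones traced above:

  # Create a 64 MiB, 512-byte-block malloc bdev and attach it as NSID 2.
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2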
00:20:10.328 [ 00:20:10.328 { 00:20:10.328 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:10.328 "subtype": "Discovery", 00:20:10.328 "listen_addresses": [], 00:20:10.328 "allow_any_host": true, 00:20:10.328 "hosts": [] 00:20:10.328 }, 00:20:10.328 { 00:20:10.328 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:10.328 "subtype": "NVMe", 00:20:10.328 "listen_addresses": [ 00:20:10.328 { 00:20:10.328 "trtype": "VFIOUSER", 00:20:10.328 "adrfam": "IPv4", 00:20:10.328 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:10.328 "trsvcid": "0" 00:20:10.328 } 00:20:10.328 ], 00:20:10.328 "allow_any_host": true, 00:20:10.328 "hosts": [], 00:20:10.328 "serial_number": "SPDK1", 00:20:10.328 "model_number": "SPDK bdev Controller", 00:20:10.328 "max_namespaces": 32, 00:20:10.328 "min_cntlid": 1, 00:20:10.328 "max_cntlid": 65519, 00:20:10.328 "namespaces": [ 00:20:10.328 { 00:20:10.328 "nsid": 1, 00:20:10.328 "bdev_name": "Malloc1", 00:20:10.328 "name": "Malloc1", 00:20:10.328 "nguid": "CD8BA188EE6E472C94824EF81A4A118B", 00:20:10.328 "uuid": "cd8ba188-ee6e-472c-9482-4ef81a4a118b" 00:20:10.328 }, 00:20:10.328 { 00:20:10.328 "nsid": 2, 00:20:10.328 "bdev_name": "Malloc3", 00:20:10.328 "name": "Malloc3", 00:20:10.328 "nguid": "11FC31FABCF9430B964426AC19763F11", 00:20:10.328 "uuid": "11fc31fa-bcf9-430b-9644-26ac19763f11" 00:20:10.328 } 00:20:10.328 ] 00:20:10.328 }, 00:20:10.328 { 00:20:10.328 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:10.328 "subtype": "NVMe", 00:20:10.328 "listen_addresses": [ 00:20:10.328 { 00:20:10.328 "trtype": "VFIOUSER", 00:20:10.328 "adrfam": "IPv4", 00:20:10.328 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:10.328 "trsvcid": "0" 00:20:10.328 } 00:20:10.328 ], 00:20:10.328 "allow_any_host": true, 00:20:10.328 "hosts": [], 00:20:10.328 "serial_number": "SPDK2", 00:20:10.328 "model_number": "SPDK bdev Controller", 00:20:10.328 "max_namespaces": 32, 00:20:10.328 "min_cntlid": 1, 00:20:10.328 "max_cntlid": 65519, 00:20:10.328 "namespaces": [ 00:20:10.328 { 00:20:10.328 "nsid": 1, 00:20:10.328 "bdev_name": "Malloc2", 00:20:10.328 "name": "Malloc2", 00:20:10.328 "nguid": "FD1C1DAB166E4AD8833B646A0DDA7DB8", 00:20:10.328 "uuid": "fd1c1dab-166e-4ad8-833b-646a0dda7db8" 00:20:10.328 }, 00:20:10.328 { 00:20:10.328 "nsid": 2, 00:20:10.328 "bdev_name": "Malloc4", 00:20:10.328 "name": "Malloc4", 00:20:10.328 "nguid": "7DED466677E04E1591952CBBBDF266D2", 00:20:10.328 "uuid": "7ded4666-77e0-4e15-9195-2cbbbdf266d2" 00:20:10.328 } 00:20:10.328 ] 00:20:10.328 } 00:20:10.328 ] 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2673575 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2664546 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 2664546 ']' 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2664546 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2664546 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2664546' 00:20:10.328 killing process with pid 2664546 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2664546 00:20:10.328 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2664546 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2673772 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2673772' 00:20:10.591 Process pid: 2673772 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2673772 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2673772 ']' 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:10.591 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:10.591 [2024-11-20 06:31:30.360474] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:10.591 [2024-11-20 06:31:30.361418] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:20:10.591 [2024-11-20 06:31:30.361464] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.591 [2024-11-20 06:31:30.447029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.591 [2024-11-20 06:31:30.482088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.591 [2024-11-20 06:31:30.482123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.591 [2024-11-20 06:31:30.482129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.591 [2024-11-20 06:31:30.482134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.591 [2024-11-20 06:31:30.482138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.591 [2024-11-20 06:31:30.483490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.591 [2024-11-20 06:31:30.483642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.591 [2024-11-20 06:31:30.483795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.591 [2024-11-20 06:31:30.483812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.852 [2024-11-20 06:31:30.538049] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:10.852 [2024-11-20 06:31:30.538951] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:20:10.852 [2024-11-20 06:31:30.539854] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:10.852 [2024-11-20 06:31:30.540495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:10.852 [2024-11-20 06:31:30.540523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
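With the interrupt-mode target up (note the reactors and spdk_threads switching to intr mode above), the test rebuilds the same per-controller vfio-user layout. A condensed sketch of that bring-up, using the commands from this run; paths are relative to the SPDK checkout:

  # Start the target pinned to cores 0-3 in interrupt mode, then create
  # the VFIOUSER transport with the -M -I options under test.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  # Per device: socket dir, malloc bdev, subsystem, namespace, listener.
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0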
00:20:11.424 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:11.424 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:20:11.424 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:12.367 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:12.629 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:12.629 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:12.629 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:12.629 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:12.629 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:12.890 Malloc1 00:20:12.890 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:12.890 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:13.151 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:13.412 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:13.412 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:13.412 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:13.412 Malloc2 00:20:13.672 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:13.672 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:13.932 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:14.193 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:14.193 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2673772 00:20:14.193 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 2673772 ']' 00:20:14.193 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2673772 00:20:14.193 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:20:14.193 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:14.193 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2673772 00:20:14.193 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:14.193 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:14.193 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2673772' 00:20:14.193 killing process with pid 2673772 00:20:14.193 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2673772 00:20:14.193 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2673772 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:14.454 00:20:14.454 real 0m51.056s 00:20:14.454 user 3m15.632s 00:20:14.454 sys 0m2.762s 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:14.454 ************************************ 00:20:14.454 END TEST nvmf_vfio_user 00:20:14.454 ************************************ 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:14.454 ************************************ 00:20:14.454 START TEST nvmf_vfio_user_nvme_compliance 00:20:14.454 ************************************ 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:14.454 * Looking for test storage... 
00:20:14.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:20:14.454 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:14.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.716 --rc genhtml_branch_coverage=1 00:20:14.716 --rc genhtml_function_coverage=1 00:20:14.716 --rc genhtml_legend=1 00:20:14.716 --rc geninfo_all_blocks=1 00:20:14.716 --rc geninfo_unexecuted_blocks=1 00:20:14.716 00:20:14.716 ' 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:14.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.716 --rc genhtml_branch_coverage=1 00:20:14.716 --rc genhtml_function_coverage=1 00:20:14.716 --rc genhtml_legend=1 00:20:14.716 --rc geninfo_all_blocks=1 00:20:14.716 --rc geninfo_unexecuted_blocks=1 00:20:14.716 00:20:14.716 ' 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:14.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.716 --rc genhtml_branch_coverage=1 00:20:14.716 --rc genhtml_function_coverage=1 00:20:14.716 --rc genhtml_legend=1 00:20:14.716 --rc geninfo_all_blocks=1 00:20:14.716 --rc geninfo_unexecuted_blocks=1 00:20:14.716 00:20:14.716 ' 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:14.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.716 --rc genhtml_branch_coverage=1 00:20:14.716 --rc genhtml_function_coverage=1 00:20:14.716 --rc genhtml_legend=1 00:20:14.716 --rc geninfo_all_blocks=1 00:20:14.716 --rc 
geninfo_unexecuted_blocks=1 00:20:14.716 00:20:14.716 ' 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.716 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:14.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2674671 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2674671' 00:20:14.717 Process pid: 2674671 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2674671 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 2674671 ']' 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:14.717 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:14.717 [2024-11-20 06:31:34.507889] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:20:14.717 [2024-11-20 06:31:34.507964] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.717 [2024-11-20 06:31:34.596600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.717 [2024-11-20 06:31:34.630365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.717 [2024-11-20 06:31:34.630399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.717 [2024-11-20 06:31:34.630405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.717 [2024-11-20 06:31:34.630410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.717 [2024-11-20 06:31:34.630414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.717 [2024-11-20 06:31:34.631740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.717 [2024-11-20 06:31:34.631893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.717 [2024-11-20 06:31:34.632017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.660 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:15.660 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:20:15.660 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:16.602 malloc0 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:16.602 06:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.602 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:16.603 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.603 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:16.603 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.603 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:16.603 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.603 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:16.603 00:20:16.603 00:20:16.603 CUnit - A unit testing framework for C - Version 2.1-3 00:20:16.603 http://cunit.sourceforge.net/ 00:20:16.603 00:20:16.603 00:20:16.603 Suite: nvme_compliance 00:20:16.864 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 06:31:36.558101] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:16.864 [2024-11-20 06:31:36.559398] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:16.864 [2024-11-20 06:31:36.559411] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:16.864 [2024-11-20 06:31:36.559416] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:16.864 [2024-11-20 06:31:36.561119] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:16.864 passed 00:20:16.864 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 06:31:36.638646] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:16.864 [2024-11-20 06:31:36.641668] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:16.864 passed 00:20:16.864 Test: admin_identify_ns ...[2024-11-20 06:31:36.719224] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:16.864 [2024-11-20 06:31:36.779760] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:17.125 [2024-11-20 06:31:36.787754] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:17.125 [2024-11-20 06:31:36.808830] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:20:17.125 passed 00:20:17.125 Test: admin_get_features_mandatory_features ...[2024-11-20 06:31:36.882048] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:17.125 [2024-11-20 06:31:36.885067] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:17.125 passed 00:20:17.125 Test: admin_get_features_optional_features ...[2024-11-20 06:31:36.961526] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:17.125 [2024-11-20 06:31:36.964545] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:17.125 passed 00:20:17.125 Test: admin_set_features_number_of_queues ...[2024-11-20 06:31:37.039096] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:17.386 [2024-11-20 06:31:37.146851] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:17.386 passed 00:20:17.386 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 06:31:37.222091] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:17.386 [2024-11-20 06:31:37.225112] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:17.386 passed 00:20:17.386 Test: admin_get_log_page_with_lpo ...[2024-11-20 06:31:37.298088] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:17.647 [2024-11-20 06:31:37.369756] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:17.647 [2024-11-20 06:31:37.382793] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:17.647 passed 00:20:17.647 Test: fabric_property_get ...[2024-11-20 06:31:37.456016] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:17.647 [2024-11-20 06:31:37.457218] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:17.647 [2024-11-20 06:31:37.459039] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:17.647 passed 00:20:17.647 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 06:31:37.535513] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:17.647 [2024-11-20 06:31:37.536719] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:17.647 [2024-11-20 06:31:37.538538] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:17.908 passed 00:20:17.908 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 06:31:37.614264] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:17.908 [2024-11-20 06:31:37.698754] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:17.908 [2024-11-20 06:31:37.714749] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:17.908 [2024-11-20 06:31:37.719835] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:17.908 passed 00:20:17.908 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 06:31:37.794063] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:17.908 [2024-11-20 06:31:37.795259] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:17.908 [2024-11-20 06:31:37.797081] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:20:17.908 passed 00:20:18.169 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 06:31:37.871794] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:18.169 [2024-11-20 06:31:37.947748] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:18.169 [2024-11-20 06:31:37.971748] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:18.169 [2024-11-20 06:31:37.976818] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:18.169 passed 00:20:18.169 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 06:31:38.051826] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:18.169 [2024-11-20 06:31:38.053027] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:18.169 [2024-11-20 06:31:38.053046] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:18.169 [2024-11-20 06:31:38.054846] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:18.169 passed 00:20:18.429 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 06:31:38.133108] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:18.429 [2024-11-20 06:31:38.225754] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:18.429 [2024-11-20 06:31:38.233753] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:18.429 [2024-11-20 06:31:38.241750] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:18.429 [2024-11-20 06:31:38.249752] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:18.429 [2024-11-20 06:31:38.278826] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:18.429 passed 00:20:18.690 Test: admin_create_io_sq_verify_pc ...[2024-11-20 06:31:38.351021] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:18.690 [2024-11-20 06:31:38.369756] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:18.690 [2024-11-20 06:31:38.387178] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:18.690 passed 00:20:18.690 Test: admin_create_io_qp_max_qps ...[2024-11-20 06:31:38.462618] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:20.074 [2024-11-20 06:31:39.564752] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:20:20.074 [2024-11-20 06:31:39.953001] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:20.074 passed 00:20:20.335 Test: admin_create_io_sq_shared_cq ...[2024-11-20 06:31:40.028875] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:20.335 [2024-11-20 06:31:40.162762] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:20.335 [2024-11-20 06:31:40.199800] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:20.335 passed 00:20:20.335 00:20:20.335 Run Summary: Type Total Ran Passed Failed Inactive 00:20:20.335 suites 1 1 n/a 0 0 00:20:20.335 tests 18 18 18 0 0 00:20:20.335 asserts 
360 360 360 0 n/a 00:20:20.335 00:20:20.335 Elapsed time = 1.496 seconds 00:20:20.335 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2674671 00:20:20.335 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 2674671 ']' 00:20:20.335 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 2674671 00:20:20.335 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:20:20.335 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.335 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2674671 00:20:20.596 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:20.596 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:20.596 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2674671' 00:20:20.596 killing process with pid 2674671 00:20:20.596 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 2674671 00:20:20.596 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 2674671 00:20:20.596 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:20.596 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:20.596 00:20:20.596 real 0m6.212s 00:20:20.596 user 0m17.607s 00:20:20.597 sys 0m0.513s 00:20:20.597 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:20.597 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:20.597 ************************************ 00:20:20.597 END TEST nvmf_vfio_user_nvme_compliance 00:20:20.597 ************************************ 00:20:20.597 06:31:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:20.597 06:31:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:20.597 06:31:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:20.597 06:31:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:20.597 ************************************ 00:20:20.597 START TEST nvmf_vfio_user_fuzz 00:20:20.597 ************************************ 00:20:20.597 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:20.858 * Looking for test storage... 
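Before the fuzz-test output resumes below: the compliance target that was just torn down was assembled with one mkdir and five RPCs, all visible in the trace above. rpc_cmd is a thin autotest wrapper; a roughly equivalent standalone sequence with scripts/rpc.py, assuming the default /var/tmp/spdk.sock RPC socket, would be:

    mkdir -p /var/run/vfio-user
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0   # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0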
00:20:20.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:20.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.859 --rc genhtml_branch_coverage=1 00:20:20.859 --rc genhtml_function_coverage=1 00:20:20.859 --rc genhtml_legend=1 00:20:20.859 --rc geninfo_all_blocks=1 00:20:20.859 --rc geninfo_unexecuted_blocks=1 00:20:20.859 00:20:20.859 ' 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:20.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.859 --rc genhtml_branch_coverage=1 00:20:20.859 --rc genhtml_function_coverage=1 00:20:20.859 --rc genhtml_legend=1 00:20:20.859 --rc geninfo_all_blocks=1 00:20:20.859 --rc geninfo_unexecuted_blocks=1 00:20:20.859 00:20:20.859 ' 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:20.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.859 --rc genhtml_branch_coverage=1 00:20:20.859 --rc genhtml_function_coverage=1 00:20:20.859 --rc genhtml_legend=1 00:20:20.859 --rc geninfo_all_blocks=1 00:20:20.859 --rc geninfo_unexecuted_blocks=1 00:20:20.859 00:20:20.859 ' 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:20.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.859 --rc genhtml_branch_coverage=1 00:20:20.859 --rc genhtml_function_coverage=1 00:20:20.859 --rc genhtml_legend=1 00:20:20.859 --rc geninfo_all_blocks=1 00:20:20.859 --rc geninfo_unexecuted_blocks=1 00:20:20.859 00:20:20.859 ' 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.859 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:20.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2675932 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2675932' 00:20:20.860 Process pid: 2675932 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2675932 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 2675932 ']' 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
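The target bring-up pattern traced above (launch nvmf_tgt in the background, record its pid, arm a cleanup trap, then block until the app serves RPCs) as a standalone sketch; killprocess and waitforlisten are autotest helpers, and the polling loop below is an assumed simplification of what waitforlisten does:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill -9 "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT
    # crude stand-in for waitforlisten: wait for the RPC socket to appear
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done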
00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.860 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:21.801 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.801 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:20:21.801 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:22.738 malloc0 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.738 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:22.998 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.998 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:22.998 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.998 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:22.998 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.998 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
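The transport ID assembled next is a space-separated key:value string naming the transport type, subsystem NQN, and vfio-user socket directory, and the fuzzer is then pointed at it for a fixed-length, fixed-seed run. Flag readings in the sketch are inferred from the values in the trace; -N and -a are passed through as-is since the trace does not show their meaning:

    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    # -m 0x2: core mask, -t 30: run time in seconds, -S 123456: fixed random seed
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a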
00:20:22.998 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:55.116 Fuzzing completed. Shutting down the fuzz application 00:20:55.116 00:20:55.116 Dumping successful admin opcodes: 00:20:55.116 8, 9, 10, 24, 00:20:55.116 Dumping successful io opcodes: 00:20:55.116 0, 00:20:55.116 NS: 0x20000081ef00 I/O qp, Total commands completed: 1422468, total successful commands: 5591, random_seed: 1953930048 00:20:55.116 NS: 0x20000081ef00 admin qp, Total commands completed: 328370, total successful commands: 2640, random_seed: 1782390912 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2675932 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 2675932 ']' 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 2675932 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2675932 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2675932' 00:20:55.116 killing process with pid 2675932 00:20:55.116 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 2675932 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 2675932 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:55.117 00:20:55.117 real 0m32.819s 00:20:55.117 user 0m37.282s 00:20:55.117 sys 0m24.125s 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:55.117 
************************************ 00:20:55.117 END TEST nvmf_vfio_user_fuzz 00:20:55.117 ************************************ 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:55.117 ************************************ 00:20:55.117 START TEST nvmf_auth_target 00:20:55.117 ************************************ 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:55.117 * Looking for test storage... 00:20:55.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:55.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.117 --rc genhtml_branch_coverage=1 00:20:55.117 --rc genhtml_function_coverage=1 00:20:55.117 --rc genhtml_legend=1 00:20:55.117 --rc geninfo_all_blocks=1 00:20:55.117 --rc geninfo_unexecuted_blocks=1 00:20:55.117 00:20:55.117 ' 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:55.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.117 --rc genhtml_branch_coverage=1 00:20:55.117 --rc genhtml_function_coverage=1 00:20:55.117 --rc genhtml_legend=1 00:20:55.117 --rc geninfo_all_blocks=1 00:20:55.117 --rc geninfo_unexecuted_blocks=1 00:20:55.117 00:20:55.117 ' 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:55.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.117 --rc genhtml_branch_coverage=1 00:20:55.117 --rc genhtml_function_coverage=1 00:20:55.117 --rc genhtml_legend=1 00:20:55.117 --rc geninfo_all_blocks=1 00:20:55.117 --rc geninfo_unexecuted_blocks=1 00:20:55.117 00:20:55.117 ' 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:55.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.117 --rc genhtml_branch_coverage=1 00:20:55.117 --rc genhtml_function_coverage=1 00:20:55.117 --rc genhtml_legend=1 00:20:55.117 --rc geninfo_all_blocks=1 00:20:55.117 --rc geninfo_unexecuted_blocks=1 00:20:55.117 00:20:55.117 ' 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:55.117 06:32:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.117 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:55.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:55.118 06:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:21:01.709 
06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:01.709 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.709 06:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:01.709 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:01.709 Found net devices under 0000:31:00.0: cvl_0_0 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:01.709 Found net devices under 0000:31:00.1: cvl_0_1 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:01.709 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.710 06:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:01.710 06:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:01.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:21:01.710 00:21:01.710 --- 10.0.0.2 ping statistics --- 00:21:01.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.710 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:21:01.710 00:21:01.710 --- 10.0.0.1 ping statistics --- 00:21:01.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.710 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2686084 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2686084 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2686084 ']' 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
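The sequence above builds the usual two-namespace test bed for these runs: the two e810 ports discovered earlier (cvl_0_0 and cvl_0_1) are split so the target side lives in its own network namespace, giving the initiator a real NVMe/TCP path to 10.0.0.2 on a single machine. Condensed from the trace (same device names, addresses, and port; error handling omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

With both pings answering, nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -L nvmf_auth), so every qpair the test creates actually crosses the physical NICs.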
00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:01.710 06:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.282 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:02.282 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:21:02.282 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:02.282 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:02.282 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.282 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.282 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2686124 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ab146573466871f5ad21a2738b1ea9165cf898cebd640c4d 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Blx 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ab146573466871f5ad21a2738b1ea9165cf898cebd640c4d 0 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ab146573466871f5ad21a2738b1ea9165cf898cebd640c4d 0 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ab146573466871f5ad21a2738b1ea9165cf898cebd640c4d 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:21:02.283 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
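The "python -" step just traced is where the raw hex from /dev/urandom becomes a DH-HMAC-CHAP secret. The snippet piped to python is not echoed by xtrace, but the layout can be read off the secrets used later in this run (e.g. DHHC-1:00:YWIx... for key0): the ASCII hex string itself is the secret body, followed by a 4-byte CRC-32 trailer, base64-encoded behind a "DHHC-1:<digest id>:" prefix and a closing colon. A minimal stand-in for that step, assuming the standard little-endian CRC trailer from the NVMe spec:

    key=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex chars
    python3 - "$key" <<'EOF'
    import base64, struct, sys, zlib
    body = sys.argv[1].encode()                 # hex string is used verbatim as the secret body
    crc = struct.pack('<I', zlib.crc32(body))   # 4-byte little-endian CRC-32 trailer (assumed)
    print('DHHC-1:00:%s:' % base64.b64encode(body + crc).decode())   # 00 = null digest
    EOF

Decoding the key0 secret that shows up at the nvme connect step below confirms the layout: its base64 payload starts YWIx..., which is just "ab1..." in ASCII, i.e. the hex key generated here, followed by the 4-byte trailer.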
00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Blx 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Blx 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Blx 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fc86d3860882267313dd2f240a73a22c35c136fc15157e0e502caae5eef6f116 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.usM 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fc86d3860882267313dd2f240a73a22c35c136fc15157e0e502caae5eef6f116 3 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fc86d3860882267313dd2f240a73a22c35c136fc15157e0e502caae5eef6f116 3 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fc86d3860882267313dd2f240a73a22c35c136fc15157e0e502caae5eef6f116 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.usM 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.usM 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.usM 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=36f422c10bdd512ce82326f74bbb5387 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dk1 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 36f422c10bdd512ce82326f74bbb5387 1 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 36f422c10bdd512ce82326f74bbb5387 1 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=36f422c10bdd512ce82326f74bbb5387 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dk1 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dk1 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.dk1 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=966e66a0d0b3ee92fe999f4746c9ab79e8e0018aece38e62 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.GA3 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 966e66a0d0b3ee92fe999f4746c9ab79e8e0018aece38e62 2 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 966e66a0d0b3ee92fe999f4746c9ab79e8e0018aece38e62 2 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.545 06:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=966e66a0d0b3ee92fe999f4746c9ab79e8e0018aece38e62 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.GA3 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.GA3 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.GA3 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:02.545 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:02.808 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:02.808 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fe1ca9aea47d69fcaae2846e5beedd65d63d06de0ce1a456 00:21:02.808 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:02.808 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.iSo 00:21:02.808 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fe1ca9aea47d69fcaae2846e5beedd65d63d06de0ce1a456 2 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fe1ca9aea47d69fcaae2846e5beedd65d63d06de0ce1a456 2 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fe1ca9aea47d69fcaae2846e5beedd65d63d06de0ce1a456 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.iSo 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.iSo 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.iSo 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
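For orientation while the remaining keys scroll past: the digest name chosen for each key determines both the id that ends up in the DHHC-1 prefix (the digests table at the top of each gen_dhchap_key call) and, in this run, the key length requested. The combinations generated here:

    digest   DHHC-1 id   hex chars   random bytes (xxd -l)
    null     00          48          24
    sha256   01          32          16
    sha384   02          48          24
    sha512   03          64          32

Each keys[i] is also paired with a ckeys[i] controller key of a different digest, so the attach steps later can exercise bidirectional authentication (--dhchap-key plus --dhchap-ctrlr-key); only keys[3] is left without a controller key.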
00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bba8866e6bdd870f74aab1b93a069293 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.iyI 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bba8866e6bdd870f74aab1b93a069293 1 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bba8866e6bdd870f74aab1b93a069293 1 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bba8866e6bdd870f74aab1b93a069293 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.iyI 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.iyI 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.iyI 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6af2bb42c3505e99e23e2f20250c917936393522a7c12330d450508eab9e9350 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.C37 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 6af2bb42c3505e99e23e2f20250c917936393522a7c12330d450508eab9e9350 3 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6af2bb42c3505e99e23e2f20250c917936393522a7c12330d450508eab9e9350 3 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6af2bb42c3505e99e23e2f20250c917936393522a7c12330d450508eab9e9350 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.C37 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.C37 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.C37 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2686084 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2686084 ']' 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:02.809 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.071 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:03.071 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:21:03.071 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2686124 /var/tmp/host.sock 00:21:03.071 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2686124 ']' 00:21:03.071 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:21:03.071 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:03.072 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:03.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
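With all the key files in place, the test registers each one with both daemons (the target over the default /var/tmp/spdk.sock, the host-side spdk_tgt over /var/tmp/host.sock) and then sweeps the digest/dhgroup/key matrix. The per-key pattern, condensed from the RPC calls traced below (same sockets, paths, and names as this run):

    # register key i and, when present, its controller key, on both sides
    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.Blx
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Blx
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.usM
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.usM

    # then, for each (digest, dhgroup, keyid) combination:
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

Each successful attach is then verified by reading the controller back (bdev_nvme_get_controllers) and checking nvmf_subsystem_get_qpairs for auth state "completed" with the expected digest and dhgroup, after which the controller is detached and the next combination is tried.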
00:21:03.072 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:03.072 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Blx 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Blx 00:21:03.332 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Blx 00:21:03.604 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.usM ]] 00:21:03.604 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.usM 00:21:03.604 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.604 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.604 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.604 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.usM 00:21:03.604 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.usM 00:21:03.604 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:03.604 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dk1 00:21:03.604 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.604 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.605 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.605 06:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dk1 00:21:03.605 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dk1 00:21:03.892 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.GA3 ]] 00:21:03.892 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GA3 00:21:03.892 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.892 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.892 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.892 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GA3 00:21:03.892 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GA3 00:21:04.153 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:04.153 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iSo 00:21:04.153 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.153 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.153 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.153 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.iSo 00:21:04.153 06:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.iSo 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.iyI ]] 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iyI 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iyI 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iyI 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:04.415 06:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.C37 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.C37 00:21:04.415 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.C37 00:21:04.676 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:04.676 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:04.676 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.676 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.676 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:04.676 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.937 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.937 
06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.198 00:21:05.198 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.198 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.198 06:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.459 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.459 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.459 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.460 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.460 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.460 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.460 { 00:21:05.460 "cntlid": 1, 00:21:05.460 "qid": 0, 00:21:05.460 "state": "enabled", 00:21:05.460 "thread": "nvmf_tgt_poll_group_000", 00:21:05.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:05.460 "listen_address": { 00:21:05.460 "trtype": "TCP", 00:21:05.460 "adrfam": "IPv4", 00:21:05.460 "traddr": "10.0.0.2", 00:21:05.460 "trsvcid": "4420" 00:21:05.460 }, 00:21:05.460 "peer_address": { 00:21:05.460 "trtype": "TCP", 00:21:05.460 "adrfam": "IPv4", 00:21:05.460 "traddr": "10.0.0.1", 00:21:05.460 "trsvcid": "46028" 00:21:05.460 }, 00:21:05.460 "auth": { 00:21:05.460 "state": "completed", 00:21:05.460 "digest": "sha256", 00:21:05.460 "dhgroup": "null" 00:21:05.460 } 00:21:05.460 } 00:21:05.460 ]' 00:21:05.460 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.460 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.460 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.460 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:05.460 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.460 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.460 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.460 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.721 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:05.721 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:06.293 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.293 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:06.293 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.293 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.293 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.293 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.293 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:06.293 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.554 06:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.554 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.814 00:21:06.814 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.814 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.814 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.075 { 00:21:07.075 "cntlid": 3, 00:21:07.075 "qid": 0, 00:21:07.075 "state": "enabled", 00:21:07.075 "thread": "nvmf_tgt_poll_group_000", 00:21:07.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:07.075 "listen_address": { 00:21:07.075 "trtype": "TCP", 00:21:07.075 "adrfam": "IPv4", 00:21:07.075 "traddr": "10.0.0.2", 00:21:07.075 "trsvcid": "4420" 00:21:07.075 }, 00:21:07.075 "peer_address": { 00:21:07.075 "trtype": "TCP", 00:21:07.075 "adrfam": "IPv4", 00:21:07.075 "traddr": "10.0.0.1", 00:21:07.075 "trsvcid": "46058" 00:21:07.075 }, 00:21:07.075 "auth": { 00:21:07.075 "state": "completed", 00:21:07.075 "digest": "sha256", 00:21:07.075 "dhgroup": "null" 00:21:07.075 } 00:21:07.075 } 00:21:07.075 ]' 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.075 06:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:07.336 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==:
00:21:07.336 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==:
00:21:07.909 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:07.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:07.909 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:21:07.909 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:07.909 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:07.909 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:07.909 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:07.909 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:07.909 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.170 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:08.430
00:21:08.430 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:08.430 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:08.430 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:08.430 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:08.430 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:08.430 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:08.430 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.690 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:08.690 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:08.690 {
00:21:08.690 "cntlid": 5,
00:21:08.690 "qid": 0,
00:21:08.690 "state": "enabled",
00:21:08.690 "thread": "nvmf_tgt_poll_group_000",
00:21:08.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6",
00:21:08.690 "listen_address": {
00:21:08.690 "trtype": "TCP",
00:21:08.690 "adrfam": "IPv4",
00:21:08.690 "traddr": "10.0.0.2",
00:21:08.690 "trsvcid": "4420"
00:21:08.690 },
00:21:08.690 "peer_address": {
00:21:08.690 "trtype": "TCP",
00:21:08.690 "adrfam": "IPv4",
00:21:08.690 "traddr": "10.0.0.1",
00:21:08.690 "trsvcid": "46078"
00:21:08.690 },
00:21:08.690 "auth": {
00:21:08.690 "state": "completed",
00:21:08.690 "digest": "sha256",
00:21:08.690 "dhgroup": "null"
00:21:08.690 }
00:21:08.690 }
00:21:08.690 ]'
00:21:08.690 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:08.690 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:08.690 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:08.690 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:08.690 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:08.690 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:08.690 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:08.690 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:08.951 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f:
00:21:08.951 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f:
00:21:09.521 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:09.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:09.521 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:21:09.521 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:09.521 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:09.521 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:09.521 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:09.521 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:09.521 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:09.782 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:10.043
00:21:10.043 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:10.043 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:10.043 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:10.304 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:10.304 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:10.304 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:10.304 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.304 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:10.304 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:10.304 {
00:21:10.304 "cntlid": 7,
00:21:10.304 "qid": 0,
00:21:10.304 "state": "enabled",
00:21:10.304 "thread": "nvmf_tgt_poll_group_000",
00:21:10.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6",
00:21:10.304 "listen_address": {
00:21:10.304 "trtype": "TCP",
00:21:10.304 "adrfam": "IPv4",
00:21:10.304 "traddr": "10.0.0.2",
00:21:10.304 "trsvcid": "4420"
00:21:10.304 },
00:21:10.304 "peer_address": {
00:21:10.304 "trtype": "TCP",
00:21:10.304 "adrfam": "IPv4",
00:21:10.304 "traddr": "10.0.0.1",
00:21:10.304 "trsvcid": "46102"
00:21:10.304 },
00:21:10.304 "auth": {
00:21:10.304 "state": "completed",
00:21:10.304 "digest": "sha256",
00:21:10.304 "dhgroup": "null"
00:21:10.304 }
00:21:10.304 }
00:21:10.304 ]'
00:21:10.304 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:10.304 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:10.304 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:10.304 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:10.304 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:10.304 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:10.304 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:10.304 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:10.565 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=:
00:21:10.565 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=:
00:21:11.136 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:11.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:11.136 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:21:11.136 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:11.136 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.136 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:11.136 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:11.136 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:11.136 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:11.136 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0
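(Aside: the trace above is one iteration of the test's digest/dhgroup sweep. Each pass re-arms the host-side bdev_nvme options, registers the host NQN on the subsystem with the key pair under test, then attaches; the attach only succeeds if DH-HMAC-CHAP completes. A minimal sketch of one iteration in standalone form, assuming rpc.py is on PATH and that $HOSTNQN holds the host NQN used in this run — both stand-ins, not part of the log:

  # host side: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # target side: allow the host NQN on the subsystem with keyring keys key0/ckey0
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach; this fails unless DH-HMAC-CHAP completes with the configured keys
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
)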
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:11.397 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:11.658
00:21:11.658 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:11.658 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:11.658 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:11.658 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:11.658 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:11.658 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:11.658 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.658 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:11.658 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:11.658 {
00:21:11.658 "cntlid": 9,
00:21:11.658 "qid": 0,
00:21:11.658 "state": "enabled",
00:21:11.658 "thread": "nvmf_tgt_poll_group_000",
00:21:11.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6",
00:21:11.658 "listen_address": {
00:21:11.658 "trtype": "TCP",
00:21:11.658 "adrfam": "IPv4",
00:21:11.658 "traddr": "10.0.0.2",
00:21:11.658 "trsvcid": "4420"
00:21:11.658 },
00:21:11.658 "peer_address": {
00:21:11.658 "trtype": "TCP",
00:21:11.658 "adrfam": "IPv4",
00:21:11.658 "traddr": "10.0.0.1",
00:21:11.658 "trsvcid": "46128"
00:21:11.658 },
00:21:11.658 "auth": {
00:21:11.658 "state": "completed",
00:21:11.658 "digest": "sha256",
00:21:11.658 "dhgroup": "ffdhe2048"
00:21:11.658 }
00:21:11.658 }
00:21:11.658 ]'
00:21:11.658 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:11.919 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:11.919 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:11.919 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:11.919 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:11.919 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:11.919 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:11.919 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:12.179 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=:
00:21:12.179 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=:
00:21:12.750 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:12.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:12.750 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:21:12.750 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:12.750 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.750 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:12.750 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:12.750 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:12.750 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:13.011 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:13.271
00:21:13.272 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:13.272 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:13.272 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:13.272 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:13.272 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:13.272 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:13.272 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:13.272 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:13.272 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:13.272 {
00:21:13.272 "cntlid": 11,
00:21:13.272 "qid": 0,
00:21:13.272 "state": "enabled",
00:21:13.272 "thread": "nvmf_tgt_poll_group_000",
00:21:13.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6",
00:21:13.272 "listen_address": {
00:21:13.272 "trtype": "TCP",
00:21:13.272 "adrfam": "IPv4",
00:21:13.272 "traddr": "10.0.0.2",
00:21:13.272 "trsvcid": "4420"
00:21:13.272 },
00:21:13.272 "peer_address": {
00:21:13.272 "trtype": "TCP",
00:21:13.272 "adrfam": "IPv4",
00:21:13.272 "traddr": "10.0.0.1",
00:21:13.272 "trsvcid": "53322"
00:21:13.272 },
00:21:13.272 "auth": {
00:21:13.272 "state": "completed",
00:21:13.272 "digest": "sha256",
00:21:13.272 "dhgroup": "ffdhe2048"
00:21:13.272 }
00:21:13.272 }
00:21:13.272 ]'
00:21:13.537 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:13.537 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:13.537 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:13.537 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:13.537 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:13.537 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:13.537 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:13.537 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:13.537 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==:
00:21:13.537 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==:
00:21:14.543 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:14.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:14.543 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:21:14.543 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:14.543 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:14.543 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:14.543 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:14.543 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:14.543 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:14.544 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:14.805
00:21:14.805 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:14.805 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:14.805 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:14.805 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:14.805 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:14.805 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:15.065 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.065 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:15.065 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:15.065 {
00:21:15.065 "cntlid": 13,
00:21:15.065 "qid": 0,
00:21:15.065 "state": "enabled",
00:21:15.065 "thread": "nvmf_tgt_poll_group_000",
00:21:15.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6",
00:21:15.065 "listen_address": {
00:21:15.065 "trtype": "TCP",
00:21:15.065 "adrfam": "IPv4",
00:21:15.065 "traddr": "10.0.0.2",
00:21:15.065 "trsvcid": "4420"
00:21:15.065 },
00:21:15.065 "peer_address": {
00:21:15.065 "trtype": "TCP",
00:21:15.065 "adrfam": "IPv4",
00:21:15.065 "traddr": "10.0.0.1",
00:21:15.065 "trsvcid": "53346"
00:21:15.065 },
00:21:15.065 "auth": {
00:21:15.065 "state": "completed",
00:21:15.065 "digest": "sha256",
00:21:15.065 "dhgroup": "ffdhe2048"
00:21:15.065 }
00:21:15.065 }
00:21:15.065 ]'
00:21:15.065 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:15.065 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:15.065 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:15.065 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:15.065 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:15.065 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:15.065 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:15.065 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:15.324 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f:
00:21:15.325 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f:
00:21:15.895 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:15.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:15.896 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:21:15.896 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:15.896 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.896 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:15.896 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:15.896 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:15.896 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:16.157 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:16.418
00:21:16.418 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:16.418 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:16.418 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:16.418 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:16.418 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:16.418 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:16.418 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.418 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:16.418 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:16.418 {
00:21:16.418 "cntlid": 15,
00:21:16.418 "qid": 0,
00:21:16.418 "state": "enabled",
00:21:16.418 "thread": "nvmf_tgt_poll_group_000",
00:21:16.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6",
00:21:16.418 "listen_address": {
00:21:16.418 "trtype": "TCP",
00:21:16.418 "adrfam": "IPv4",
00:21:16.418 "traddr": "10.0.0.2",
00:21:16.418 "trsvcid": "4420"
00:21:16.418 },
00:21:16.418 "peer_address": {
00:21:16.418 "trtype": "TCP",
00:21:16.418 "adrfam": "IPv4",
00:21:16.418 "traddr": "10.0.0.1",
00:21:16.418 "trsvcid": "53362"
00:21:16.418 },
00:21:16.418 "auth": {
00:21:16.418 "state": "completed",
00:21:16.418 "digest": "sha256",
00:21:16.418 "dhgroup": "ffdhe2048"
00:21:16.418 }
00:21:16.418 }
00:21:16.418 ]'
00:21:16.679 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:16.679 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:16.679 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:16.679 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:16.679 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:16.679 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:16.679 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:16.679 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:16.939 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=:
00:21:16.939 06:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=:
00:21:17.509 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:17.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:17.509 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:21:17.509 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:17.509 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.509 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:17.509 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:17.509 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:17.509 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:17.509 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.770 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:18.032
00:21:18.032 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:18.032 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:18.032 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:18.032 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:18.293 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:18.293 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:18.293 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.293 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:18.293 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:18.293 {
00:21:18.293 "cntlid": 17,
00:21:18.293 "qid": 0,
00:21:18.293 "state": "enabled",
00:21:18.293 "thread": "nvmf_tgt_poll_group_000",
00:21:18.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6",
00:21:18.293 "listen_address": {
00:21:18.293 "trtype": "TCP",
00:21:18.293 "adrfam": "IPv4",
00:21:18.293 "traddr": "10.0.0.2",
00:21:18.293 "trsvcid": "4420"
00:21:18.293 },
00:21:18.293 "peer_address": {
00:21:18.293 "trtype": "TCP",
00:21:18.293 "adrfam": "IPv4",
00:21:18.293 "traddr": "10.0.0.1",
00:21:18.293 "trsvcid": "53400"
00:21:18.293 },
00:21:18.293 "auth": {
00:21:18.293 "state": "completed",
00:21:18.293 "digest": "sha256",
00:21:18.293 "dhgroup": "ffdhe3072"
00:21:18.293 }
00:21:18.293 }
00:21:18.293 ]'
00:21:18.293 06:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:18.293 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:18.293 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:18.293 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:18.293 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:18.293 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:18.293 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:18.293 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:18.555 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=:
00:21:18.555 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=:
00:21:19.127 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:19.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:19.127 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:21:19.127 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:19.127 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.127 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:19.127 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:19.127 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:19.127 06:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:19.388 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:19.648
00:21:19.648 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:19.648 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:19.648 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:19.909 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:19.909 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:19.909 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:19.909 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.909 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:19.909 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:19.909 {
00:21:19.909 "cntlid": 19,
00:21:19.909 "qid": 0,
00:21:19.909 "state": "enabled",
00:21:19.909 "thread": "nvmf_tgt_poll_group_000",
00:21:19.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6",
00:21:19.909 "listen_address": {
00:21:19.909 "trtype": "TCP",
00:21:19.909 "adrfam": "IPv4",
00:21:19.909 "traddr": "10.0.0.2",
00:21:19.909 "trsvcid": "4420"
00:21:19.909 },
00:21:19.909 "peer_address": {
00:21:19.909 "trtype": "TCP",
00:21:19.909 "adrfam": "IPv4",
00:21:19.909 "traddr": "10.0.0.1",
00:21:19.909 "trsvcid": "53430"
00:21:19.909 },
00:21:19.909 "auth": {
00:21:19.909 "state": "completed",
00:21:19.909 "digest": "sha256",
00:21:19.909 "dhgroup": "ffdhe3072"
00:21:19.909 }
00:21:19.909 }
00:21:19.909 ]'
00:21:19.909 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:19.909 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:19.909 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:19.909 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:19.909 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:19.910 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:19.910 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:19.910 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:20.171 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==:
00:21:20.171 06:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==:
00:21:20.743 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:20.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:20.743 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:21:20.743 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:20.743 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.743 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:20.743 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:20.743 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:20.743 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:21.004 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:21:21.004 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:21.004 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:21.004 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:21.004 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:21.004 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:21.005 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:21.005 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:21.005 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:21.005 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:21.005 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:21.005 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:21.005 06:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:21.266
00:21:21.266 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:21.266 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:21.266 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
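(Aside: besides the SPDK-initiator attach, every iteration also drives the same secrets through the kernel initiator: nvme_connect wraps nvme-cli with the DHHC-1 secrets passed literally, and teardown is a disconnect plus removing the host from the subsystem. A sketch under the same stand-in assumptions; the <base64 key> fields are placeholders, not keys from this run:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
      --dhchap-secret 'DHHC-1:00:<base64 key>:' --dhchap-ctrl-secret 'DHHC-1:03:<base64 ctrl key>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
)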
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:21.528 {
00:21:21.528 "cntlid": 21,
00:21:21.528 "qid": 0,
00:21:21.528 "state": "enabled",
00:21:21.528 "thread": "nvmf_tgt_poll_group_000",
00:21:21.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6",
00:21:21.528 "listen_address": {
00:21:21.528 "trtype": "TCP",
00:21:21.528 "adrfam": "IPv4",
00:21:21.528 "traddr": "10.0.0.2",
00:21:21.528 "trsvcid": "4420"
00:21:21.528 },
00:21:21.528 "peer_address": {
00:21:21.528 "trtype": "TCP",
00:21:21.528 "adrfam": "IPv4",
00:21:21.528 "traddr": "10.0.0.1",
00:21:21.528 "trsvcid": "53462"
00:21:21.528 },
00:21:21.528 "auth": {
00:21:21.528 "state": "completed",
00:21:21.528 "digest": "sha256",
00:21:21.528 "dhgroup": "ffdhe3072"
00:21:21.528 }
00:21:21.528 }
00:21:21.528 ]'
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:21.528 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:21.789 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f:
00:21:21.789 06:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f:
00:21:22.360 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:22.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:22.360 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:21:22.360 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:22.360 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.360 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:22.360 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:22.360 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:22.360 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:22.621 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:22.882
00:21:22.882 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:22.882 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:22.882 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:23.144 {
00:21:23.144 "cntlid": 23,
00:21:23.144 "qid": 0,
00:21:23.144 "state": "enabled",
00:21:23.144 "thread": "nvmf_tgt_poll_group_000",
00:21:23.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6",
00:21:23.144 "listen_address": {
00:21:23.144 "trtype": "TCP",
00:21:23.144 "adrfam": "IPv4",
00:21:23.144 "traddr": "10.0.0.2",
00:21:23.144 "trsvcid": "4420"
00:21:23.144 },
00:21:23.144 "peer_address": {
00:21:23.144 "trtype": "TCP",
00:21:23.144 "adrfam": "IPv4",
00:21:23.144 "traddr": "10.0.0.1",
00:21:23.144 "trsvcid": "53480"
00:21:23.144 },
00:21:23.144 "auth": {
00:21:23.144 "state": "completed",
00:21:23.144 "digest": "sha256",
00:21:23.144 "dhgroup": "ffdhe3072"
00:21:23.144 }
00:21:23.144 }
00:21:23.144 ]'
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:23.144 06:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:23.405 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=:
00:21:23.405 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=:
00:21:23.977 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:23.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:23.977 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:21:23.977 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:23.977 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.977 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0
== 0 ]] 00:21:23.977 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.977 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.977 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:23.977 06:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.237 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.497 00:21:24.497 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.497 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.497 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.758 { 00:21:24.758 "cntlid": 25, 00:21:24.758 "qid": 0, 00:21:24.758 "state": "enabled", 00:21:24.758 "thread": "nvmf_tgt_poll_group_000", 00:21:24.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:24.758 "listen_address": { 00:21:24.758 "trtype": "TCP", 00:21:24.758 "adrfam": "IPv4", 00:21:24.758 "traddr": "10.0.0.2", 00:21:24.758 "trsvcid": "4420" 00:21:24.758 }, 00:21:24.758 "peer_address": { 00:21:24.758 "trtype": "TCP", 00:21:24.758 "adrfam": "IPv4", 00:21:24.758 "traddr": "10.0.0.1", 00:21:24.758 "trsvcid": "36792" 00:21:24.758 }, 00:21:24.758 "auth": { 00:21:24.758 "state": "completed", 00:21:24.758 "digest": "sha256", 00:21:24.758 "dhgroup": "ffdhe4096" 00:21:24.758 } 00:21:24.758 } 00:21:24.758 ]' 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.758 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.019 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:25.019 06:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:25.591 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.591 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:25.591 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.591 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.591 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.591 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.591 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:25.591 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:25.851 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:25.851 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.851 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:25.851 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:25.851 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:25.851 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.851 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.852 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.852 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.852 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.852 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.852 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.852 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.112 00:21:26.112 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.112 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.112 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.372 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.372 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.372 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.372 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.372 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.372 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.372 { 00:21:26.372 "cntlid": 27, 00:21:26.372 "qid": 0, 00:21:26.372 "state": "enabled", 00:21:26.372 "thread": "nvmf_tgt_poll_group_000", 00:21:26.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:26.372 "listen_address": { 00:21:26.373 "trtype": "TCP", 00:21:26.373 "adrfam": "IPv4", 00:21:26.373 "traddr": "10.0.0.2", 00:21:26.373 "trsvcid": "4420" 00:21:26.373 }, 00:21:26.373 "peer_address": { 00:21:26.373 "trtype": "TCP", 00:21:26.373 "adrfam": "IPv4", 00:21:26.373 "traddr": "10.0.0.1", 00:21:26.373 "trsvcid": "36800" 00:21:26.373 }, 00:21:26.373 "auth": { 00:21:26.373 "state": "completed", 00:21:26.373 "digest": "sha256", 00:21:26.373 "dhgroup": "ffdhe4096" 00:21:26.373 } 00:21:26.373 } 00:21:26.373 ]' 00:21:26.373 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.373 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:26.373 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.373 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.373 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.373 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.373 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.373 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.633 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:21:26.633 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:21:27.204 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:27.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.465 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.726 00:21:27.726 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
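The records immediately above and below are the verification half of the script's connect_authenticate() helper: once bdev_nvme_attach_controller returns, the host-side controller list is read back over the /var/tmp/host.sock RPC socket, the nvme0 name is asserted, and the target's qpair list is inspected with jq to confirm the negotiated digest, dhgroup, and auth state. A minimal sketch of that check, under the assumptions that rpc.py stands for the full /var/jenkins/workspace/... path traced above and that rpc_cmd is the target-side wrapper from autotest_common.sh; the expected values match this sha256/ffdhe4096 iteration of the log:

    # Host side: confirm the attached controller is visible as nvme0.
    name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Target side: fetch the subsystem's qpairs and assert the negotiated auth tuple.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]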
00:21:27.726 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.726 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.987 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.987 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.987 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.987 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.987 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.987 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.987 { 00:21:27.987 "cntlid": 29, 00:21:27.987 "qid": 0, 00:21:27.987 "state": "enabled", 00:21:27.987 "thread": "nvmf_tgt_poll_group_000", 00:21:27.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:27.987 "listen_address": { 00:21:27.987 "trtype": "TCP", 00:21:27.987 "adrfam": "IPv4", 00:21:27.987 "traddr": "10.0.0.2", 00:21:27.987 "trsvcid": "4420" 00:21:27.987 }, 00:21:27.987 "peer_address": { 00:21:27.987 "trtype": "TCP", 00:21:27.987 "adrfam": "IPv4", 00:21:27.987 "traddr": "10.0.0.1", 00:21:27.987 "trsvcid": "36822" 00:21:27.987 }, 00:21:27.987 "auth": { 00:21:27.987 "state": "completed", 00:21:27.987 "digest": "sha256", 00:21:27.987 "dhgroup": "ffdhe4096" 00:21:27.987 } 00:21:27.987 } 00:21:27.987 ]' 00:21:27.987 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.987 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:27.987 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.987 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.987 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.248 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.248 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.248 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.248 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:21:28.248 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: 
--dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:29.192 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:29.193 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:29.193 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.193 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:21:29.193 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.193 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.193 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.193 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:29.193 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.193 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.456 00:21:29.456 06:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.456 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.456 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.717 { 00:21:29.717 "cntlid": 31, 00:21:29.717 "qid": 0, 00:21:29.717 "state": "enabled", 00:21:29.717 "thread": "nvmf_tgt_poll_group_000", 00:21:29.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:29.717 "listen_address": { 00:21:29.717 "trtype": "TCP", 00:21:29.717 "adrfam": "IPv4", 00:21:29.717 "traddr": "10.0.0.2", 00:21:29.717 "trsvcid": "4420" 00:21:29.717 }, 00:21:29.717 "peer_address": { 00:21:29.717 "trtype": "TCP", 00:21:29.717 "adrfam": "IPv4", 00:21:29.717 "traddr": "10.0.0.1", 00:21:29.717 "trsvcid": "36832" 00:21:29.717 }, 00:21:29.717 "auth": { 00:21:29.717 "state": "completed", 00:21:29.717 "digest": "sha256", 00:21:29.717 "dhgroup": "ffdhe4096" 00:21:29.717 } 00:21:29.717 } 00:21:29.717 ]' 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.717 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.977 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:21:29.977 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret 
DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:21:30.550 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.550 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:30.550 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.550 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.550 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.550 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.551 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.551 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:30.551 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.812 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.074 00:21:31.074 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.074 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.074 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.335 { 00:21:31.335 "cntlid": 33, 00:21:31.335 "qid": 0, 00:21:31.335 "state": "enabled", 00:21:31.335 "thread": "nvmf_tgt_poll_group_000", 00:21:31.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:31.335 "listen_address": { 00:21:31.335 "trtype": "TCP", 00:21:31.335 "adrfam": "IPv4", 00:21:31.335 "traddr": "10.0.0.2", 00:21:31.335 "trsvcid": "4420" 00:21:31.335 }, 00:21:31.335 "peer_address": { 00:21:31.335 "trtype": "TCP", 00:21:31.335 "adrfam": "IPv4", 00:21:31.335 "traddr": "10.0.0.1", 00:21:31.335 "trsvcid": "36866" 00:21:31.335 }, 00:21:31.335 "auth": { 00:21:31.335 "state": "completed", 00:21:31.335 "digest": "sha256", 00:21:31.335 "dhgroup": "ffdhe6144" 00:21:31.335 } 00:21:31.335 } 00:21:31.335 ]' 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.335 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.596 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret 
DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:31.596 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:32.165 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.165 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:32.165 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.165 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.426 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.686 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.947 { 00:21:32.947 "cntlid": 35, 00:21:32.947 "qid": 0, 00:21:32.947 "state": "enabled", 00:21:32.947 "thread": "nvmf_tgt_poll_group_000", 00:21:32.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:32.947 "listen_address": { 00:21:32.947 "trtype": "TCP", 00:21:32.947 "adrfam": "IPv4", 00:21:32.947 "traddr": "10.0.0.2", 00:21:32.947 "trsvcid": "4420" 00:21:32.947 }, 00:21:32.947 "peer_address": { 00:21:32.947 "trtype": "TCP", 00:21:32.947 "adrfam": "IPv4", 00:21:32.947 "traddr": "10.0.0.1", 00:21:32.947 "trsvcid": "36892" 00:21:32.947 }, 00:21:32.947 "auth": { 00:21:32.947 "state": "completed", 00:21:32.947 "digest": "sha256", 00:21:32.947 "dhgroup": "ffdhe6144" 00:21:32.947 } 00:21:32.947 } 00:21:32.947 ]' 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:32.947 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.208 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:33.208 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.208 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.208 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.208 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.208 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:21:33.208 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.218 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.480 00:21:34.480 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.480 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.480 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.741 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.741 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.741 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.741 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.741 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.741 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.741 { 00:21:34.741 "cntlid": 37, 00:21:34.741 "qid": 0, 00:21:34.741 "state": "enabled", 00:21:34.741 "thread": "nvmf_tgt_poll_group_000", 00:21:34.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:34.741 "listen_address": { 00:21:34.741 "trtype": "TCP", 00:21:34.741 "adrfam": "IPv4", 00:21:34.741 "traddr": "10.0.0.2", 00:21:34.741 "trsvcid": "4420" 00:21:34.741 }, 00:21:34.741 "peer_address": { 00:21:34.741 "trtype": "TCP", 00:21:34.741 "adrfam": "IPv4", 00:21:34.741 "traddr": "10.0.0.1", 00:21:34.741 "trsvcid": "51168" 00:21:34.741 }, 00:21:34.741 "auth": { 00:21:34.741 "state": "completed", 00:21:34.741 "digest": "sha256", 00:21:34.741 "dhgroup": "ffdhe6144" 00:21:34.741 } 00:21:34.741 } 00:21:34.741 ]' 00:21:34.741 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.741 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:34.741 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.741 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:34.741 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.002 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.002 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:35.002 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.002 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:21:35.002 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.945 06:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.945 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.206 00:21:36.206 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.206 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.206 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.469 { 00:21:36.469 "cntlid": 39, 00:21:36.469 "qid": 0, 00:21:36.469 "state": "enabled", 00:21:36.469 "thread": "nvmf_tgt_poll_group_000", 00:21:36.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:36.469 "listen_address": { 00:21:36.469 "trtype": "TCP", 00:21:36.469 "adrfam": "IPv4", 00:21:36.469 "traddr": "10.0.0.2", 00:21:36.469 "trsvcid": "4420" 00:21:36.469 }, 00:21:36.469 "peer_address": { 00:21:36.469 "trtype": "TCP", 00:21:36.469 "adrfam": "IPv4", 00:21:36.469 "traddr": "10.0.0.1", 00:21:36.469 "trsvcid": "51200" 00:21:36.469 }, 00:21:36.469 "auth": { 00:21:36.469 "state": "completed", 00:21:36.469 "digest": "sha256", 00:21:36.469 "dhgroup": "ffdhe6144" 00:21:36.469 } 00:21:36.469 } 00:21:36.469 ]' 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.469 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.729 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:21:36.729 06:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:21:37.300 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.300 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:37.300 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.300 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.300 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.300 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.301 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.301 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:37.301 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
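[The cycle traced above and below repeats once per digest/dhgroup/key combination: restrict the host to one DH-HMAC-CHAP digest and DH group, register the host's key on the subsystem, attach a controller through the host RPC socket (which forces the authentication handshake), then assert the negotiated parameters on the resulting qpair. A condensed sketch of that cycle, distilled from the commands in this trace: the rpc.py path, the /var/tmp/host.sock socket, the nvme0/cnode0 names, host NQN, and key indices are all taken from the log, and rpc_cmd stands for the autotest helper that drives the target-side RPC socket. This is an illustration of the pattern, not the literal target/auth.sh source.

# connect_authenticate <digest> <dhgroup> <keyid>: one cycle as exercised in this trace
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
subnqn=nqn.2024-03.io.spdk:cnode0
digest=$1 dhgroup=$2 keyid=$3

# host side: only this digest/dhgroup pair may be negotiated
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# target side: register the host with the DH-HMAC-CHAP key under test
# (for key3 the trace omits --dhchap-ctrlr-key; the script's
#  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion handles that case)
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# attaching a controller through the host socket forces authentication
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# assert what was negotiated on the qpair, as the [[ ... ]] checks in the trace do
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The keys key0..key3 and ctrlr keys ckey0..ckey3 referenced here are loaded earlier in the run, outside this excerpt.]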
00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.561 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.133 00:21:38.133 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.133 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.133 06:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.393 { 00:21:38.393 "cntlid": 41, 00:21:38.393 "qid": 0, 00:21:38.393 "state": "enabled", 00:21:38.393 "thread": "nvmf_tgt_poll_group_000", 00:21:38.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:38.393 "listen_address": { 00:21:38.393 "trtype": "TCP", 00:21:38.393 "adrfam": "IPv4", 00:21:38.393 "traddr": "10.0.0.2", 00:21:38.393 "trsvcid": "4420" 00:21:38.393 }, 00:21:38.393 "peer_address": { 00:21:38.393 "trtype": "TCP", 00:21:38.393 "adrfam": "IPv4", 00:21:38.393 "traddr": "10.0.0.1", 00:21:38.393 "trsvcid": "51230" 00:21:38.393 }, 00:21:38.393 "auth": { 00:21:38.393 "state": "completed", 00:21:38.393 "digest": "sha256", 00:21:38.393 "dhgroup": "ffdhe8192" 00:21:38.393 } 00:21:38.393 } 00:21:38.393 ]' 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.393 06:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.393 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.653 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:38.653 06:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:39.224 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.224 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:39.224 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.224 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.224 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.224 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.224 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:39.224 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:39.485 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:39.485 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.485 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:39.485 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:39.485 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:39.485 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.485 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.485 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.486 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.486 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.486 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.486 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.486 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.057 00:21:40.057 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.057 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.057 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.058 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.058 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.058 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.058 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.058 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.058 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.058 { 00:21:40.058 "cntlid": 43, 00:21:40.058 "qid": 0, 00:21:40.058 "state": "enabled", 00:21:40.058 "thread": "nvmf_tgt_poll_group_000", 00:21:40.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:40.058 "listen_address": { 00:21:40.058 "trtype": "TCP", 00:21:40.058 "adrfam": "IPv4", 00:21:40.058 "traddr": "10.0.0.2", 00:21:40.058 "trsvcid": "4420" 00:21:40.058 }, 00:21:40.058 "peer_address": { 00:21:40.058 "trtype": "TCP", 00:21:40.058 "adrfam": "IPv4", 00:21:40.058 "traddr": "10.0.0.1", 00:21:40.058 "trsvcid": "51264" 00:21:40.058 }, 00:21:40.058 "auth": { 00:21:40.058 "state": "completed", 00:21:40.058 "digest": "sha256", 00:21:40.058 "dhgroup": "ffdhe8192" 00:21:40.058 } 00:21:40.058 } 00:21:40.058 ]' 00:21:40.058 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.058 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:21:40.058 06:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.319 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.319 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.319 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.319 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.319 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.579 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:21:40.579 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:21:41.152 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.152 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:41.152 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.152 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.152 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.152 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.152 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:41.152 06:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.413 06:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.413 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.674 00:21:41.674 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.674 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.674 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.935 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.935 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.935 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.935 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.935 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.935 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.935 { 00:21:41.935 "cntlid": 45, 00:21:41.935 "qid": 0, 00:21:41.935 "state": "enabled", 00:21:41.935 "thread": "nvmf_tgt_poll_group_000", 00:21:41.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:41.935 "listen_address": { 00:21:41.935 "trtype": "TCP", 00:21:41.935 "adrfam": "IPv4", 00:21:41.935 "traddr": "10.0.0.2", 00:21:41.935 "trsvcid": "4420" 00:21:41.935 }, 00:21:41.935 "peer_address": { 00:21:41.935 "trtype": "TCP", 00:21:41.935 "adrfam": "IPv4", 00:21:41.935 "traddr": "10.0.0.1", 00:21:41.935 "trsvcid": "51298" 00:21:41.935 }, 00:21:41.935 "auth": { 00:21:41.935 "state": "completed", 00:21:41.935 "digest": "sha256", 00:21:41.935 "dhgroup": "ffdhe8192" 00:21:41.935 } 00:21:41.935 } 00:21:41.935 ]' 00:21:41.935 
06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.935 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:41.935 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.935 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:41.935 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.195 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.195 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.195 06:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.195 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:21:42.196 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:43.138 06:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.138 06:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.708 00:21:43.708 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.708 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.708 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.708 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.708 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.708 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.708 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.708 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.708 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.708 { 00:21:43.708 "cntlid": 47, 00:21:43.708 "qid": 0, 00:21:43.708 "state": "enabled", 00:21:43.708 "thread": "nvmf_tgt_poll_group_000", 00:21:43.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:43.708 "listen_address": { 00:21:43.708 "trtype": "TCP", 00:21:43.708 "adrfam": "IPv4", 00:21:43.708 "traddr": "10.0.0.2", 00:21:43.708 "trsvcid": "4420" 00:21:43.708 }, 00:21:43.708 "peer_address": { 00:21:43.708 "trtype": "TCP", 00:21:43.708 "adrfam": "IPv4", 00:21:43.708 "traddr": "10.0.0.1", 00:21:43.708 "trsvcid": "38072" 00:21:43.708 }, 00:21:43.708 "auth": { 00:21:43.708 "state": "completed", 00:21:43.708 
"digest": "sha256", 00:21:43.708 "dhgroup": "ffdhe8192" 00:21:43.708 } 00:21:43.708 } 00:21:43.708 ]' 00:21:43.708 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.970 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:43.970 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.970 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.970 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.970 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.970 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.970 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.232 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:21:44.232 06:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:21:44.804 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.804 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:44.804 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.804 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.804 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.804 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:44.804 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.804 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.804 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:44.804 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:45.065 06:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.065 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.065 00:21:45.326 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.326 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.326 06:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.326 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.326 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.326 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.326 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.326 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.326 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.326 { 00:21:45.326 "cntlid": 49, 00:21:45.326 "qid": 0, 00:21:45.326 "state": "enabled", 00:21:45.326 "thread": "nvmf_tgt_poll_group_000", 00:21:45.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:45.326 "listen_address": { 00:21:45.326 "trtype": "TCP", 00:21:45.326 "adrfam": "IPv4", 
00:21:45.326 "traddr": "10.0.0.2", 00:21:45.326 "trsvcid": "4420" 00:21:45.326 }, 00:21:45.326 "peer_address": { 00:21:45.326 "trtype": "TCP", 00:21:45.326 "adrfam": "IPv4", 00:21:45.326 "traddr": "10.0.0.1", 00:21:45.326 "trsvcid": "38116" 00:21:45.326 }, 00:21:45.326 "auth": { 00:21:45.326 "state": "completed", 00:21:45.326 "digest": "sha384", 00:21:45.326 "dhgroup": "null" 00:21:45.326 } 00:21:45.326 } 00:21:45.326 ]' 00:21:45.326 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.326 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.586 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.586 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:45.586 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.586 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.586 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.586 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.846 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:45.846 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:46.418 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.418 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:46.418 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.418 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.418 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.418 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.418 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:46.418 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:46.678 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:46.678 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.678 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:46.678 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:46.678 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:46.678 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.678 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.678 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.678 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.678 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.678 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.679 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.679 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.679 00:21:46.939 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.939 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.940 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.940 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.940 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.940 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.940 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.940 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.940 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.940 { 00:21:46.940 "cntlid": 51, 00:21:46.940 "qid": 0, 00:21:46.940 "state": "enabled", 
00:21:46.940 "thread": "nvmf_tgt_poll_group_000", 00:21:46.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:46.940 "listen_address": { 00:21:46.940 "trtype": "TCP", 00:21:46.940 "adrfam": "IPv4", 00:21:46.940 "traddr": "10.0.0.2", 00:21:46.940 "trsvcid": "4420" 00:21:46.940 }, 00:21:46.940 "peer_address": { 00:21:46.940 "trtype": "TCP", 00:21:46.940 "adrfam": "IPv4", 00:21:46.940 "traddr": "10.0.0.1", 00:21:46.940 "trsvcid": "38134" 00:21:46.940 }, 00:21:46.940 "auth": { 00:21:46.940 "state": "completed", 00:21:46.940 "digest": "sha384", 00:21:46.940 "dhgroup": "null" 00:21:46.940 } 00:21:46.940 } 00:21:46.940 ]' 00:21:46.940 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.940 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.940 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.200 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:47.200 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.200 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.200 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.200 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.462 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:21:47.462 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:21:48.034 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.034 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:48.034 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.034 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.034 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.034 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.034 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:21:48.034 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.295 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.295 00:21:48.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.557 06:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.557 { 00:21:48.557 "cntlid": 53, 00:21:48.557 "qid": 0, 00:21:48.557 "state": "enabled", 00:21:48.557 "thread": "nvmf_tgt_poll_group_000", 00:21:48.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:48.557 "listen_address": { 00:21:48.557 "trtype": "TCP", 00:21:48.557 "adrfam": "IPv4", 00:21:48.557 "traddr": "10.0.0.2", 00:21:48.557 "trsvcid": "4420" 00:21:48.557 }, 00:21:48.557 "peer_address": { 00:21:48.557 "trtype": "TCP", 00:21:48.557 "adrfam": "IPv4", 00:21:48.557 "traddr": "10.0.0.1", 00:21:48.557 "trsvcid": "38154" 00:21:48.557 }, 00:21:48.557 "auth": { 00:21:48.557 "state": "completed", 00:21:48.557 "digest": "sha384", 00:21:48.557 "dhgroup": "null" 00:21:48.557 } 00:21:48.557 } 00:21:48.557 ]' 00:21:48.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.818 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:48.818 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.818 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.818 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.818 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.818 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:21:49.078 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:21:49.650 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.650 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:49.650 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.650 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.650 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.650 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:21:49.650 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:49.650 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.036 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.036 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.303 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.303 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.303 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.303 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.303 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.303 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.303 { 00:21:50.303 "cntlid": 55, 00:21:50.303 "qid": 0, 00:21:50.303 "state": "enabled", 00:21:50.303 "thread": "nvmf_tgt_poll_group_000", 00:21:50.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:50.303 "listen_address": { 00:21:50.303 "trtype": "TCP", 00:21:50.303 "adrfam": "IPv4", 00:21:50.303 "traddr": "10.0.0.2", 00:21:50.303 "trsvcid": "4420" 00:21:50.303 }, 00:21:50.303 "peer_address": { 00:21:50.303 "trtype": "TCP", 00:21:50.303 "adrfam": "IPv4", 00:21:50.303 "traddr": "10.0.0.1", 00:21:50.303 "trsvcid": "38190" 00:21:50.303 }, 00:21:50.303 "auth": { 00:21:50.303 "state": "completed", 00:21:50.303 "digest": "sha384", 00:21:50.303 "dhgroup": "null" 00:21:50.303 } 00:21:50.303 } 00:21:50.303 ]' 00:21:50.303 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.303 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.303 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.303 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:50.303 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.303 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.303 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.303 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.564 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:21:50.564 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:21:51.136 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.136 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:51.136 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.136 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.136 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.136 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.136 06:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.136 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:51.136 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.397 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.658 00:21:51.658 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.658 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.658 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.918 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.918 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.918 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:51.918 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.918 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.918 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.918 { 00:21:51.918 "cntlid": 57, 00:21:51.918 "qid": 0, 00:21:51.918 "state": "enabled", 00:21:51.918 "thread": "nvmf_tgt_poll_group_000", 00:21:51.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:51.919 "listen_address": { 00:21:51.919 "trtype": "TCP", 00:21:51.919 "adrfam": "IPv4", 00:21:51.919 "traddr": "10.0.0.2", 00:21:51.919 "trsvcid": "4420" 00:21:51.919 }, 00:21:51.919 "peer_address": { 00:21:51.919 "trtype": "TCP", 00:21:51.919 "adrfam": "IPv4", 00:21:51.919 "traddr": "10.0.0.1", 00:21:51.919 "trsvcid": "38222" 00:21:51.919 }, 00:21:51.919 "auth": { 00:21:51.919 "state": "completed", 00:21:51.919 "digest": "sha384", 00:21:51.919 "dhgroup": "ffdhe2048" 00:21:51.919 } 00:21:51.919 } 00:21:51.919 ]' 00:21:51.919 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.919 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.919 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.919 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:51.919 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.919 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.919 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.919 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.179 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:52.179 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:52.748 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.748 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:52.748 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.748 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.748 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.748 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.748 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:52.748 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.017 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.278 00:21:53.278 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.278 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.278 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.538 { 00:21:53.538 "cntlid": 59, 00:21:53.538 "qid": 0, 00:21:53.538 "state": "enabled", 00:21:53.538 "thread": "nvmf_tgt_poll_group_000", 00:21:53.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:53.538 "listen_address": { 00:21:53.538 "trtype": "TCP", 00:21:53.538 "adrfam": "IPv4", 00:21:53.538 "traddr": "10.0.0.2", 00:21:53.538 "trsvcid": "4420" 00:21:53.538 }, 00:21:53.538 "peer_address": { 00:21:53.538 "trtype": "TCP", 00:21:53.538 "adrfam": "IPv4", 00:21:53.538 "traddr": "10.0.0.1", 00:21:53.538 "trsvcid": "60000" 00:21:53.538 }, 00:21:53.538 "auth": { 00:21:53.538 "state": "completed", 00:21:53.538 "digest": "sha384", 00:21:53.538 "dhgroup": "ffdhe2048" 00:21:53.538 } 00:21:53.538 } 00:21:53.538 ]' 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.538 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.798 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:21:53.798 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:21:54.368 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.368 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:54.368 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.368 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.368 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.368 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.368 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:54.368 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:54.628 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:54.628 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.628 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:54.629 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:54.629 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:54.629 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.629 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.629 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.629 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.629 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.629 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.629 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.629 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.890 00:21:54.890 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.890 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:54.890 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.890 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.890 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.890 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.890 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.890 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.890 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.890 { 00:21:54.890 "cntlid": 61, 00:21:54.890 "qid": 0, 00:21:54.890 "state": "enabled", 00:21:54.890 "thread": "nvmf_tgt_poll_group_000", 00:21:54.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:54.890 "listen_address": { 00:21:54.890 "trtype": "TCP", 00:21:54.890 "adrfam": "IPv4", 00:21:54.890 "traddr": "10.0.0.2", 00:21:54.890 "trsvcid": "4420" 00:21:54.890 }, 00:21:54.890 "peer_address": { 00:21:54.890 "trtype": "TCP", 00:21:54.890 "adrfam": "IPv4", 00:21:54.890 "traddr": "10.0.0.1", 00:21:54.890 "trsvcid": "60028" 00:21:54.890 }, 00:21:54.890 "auth": { 00:21:54.890 "state": "completed", 00:21:54.890 "digest": "sha384", 00:21:54.890 "dhgroup": "ffdhe2048" 00:21:54.890 } 00:21:54.890 } 00:21:54.890 ]' 00:21:54.890 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.151 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:55.151 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.151 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:55.151 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.151 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.151 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.151 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.411 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:21:55.412 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:21:55.982 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.982 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:55.983 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.983 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.983 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.983 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.983 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:55.983 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.244 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.505 00:21:56.505 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.505 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.505 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.505 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.505 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.505 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.505 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.505 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.505 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.505 { 00:21:56.505 "cntlid": 63, 00:21:56.505 "qid": 0, 00:21:56.505 "state": "enabled", 00:21:56.505 "thread": "nvmf_tgt_poll_group_000", 00:21:56.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:56.505 "listen_address": { 00:21:56.505 "trtype": "TCP", 00:21:56.505 "adrfam": "IPv4", 00:21:56.505 "traddr": "10.0.0.2", 00:21:56.505 "trsvcid": "4420" 00:21:56.505 }, 00:21:56.505 "peer_address": { 00:21:56.505 "trtype": "TCP", 00:21:56.505 "adrfam": "IPv4", 00:21:56.505 "traddr": "10.0.0.1", 00:21:56.505 "trsvcid": "60042" 00:21:56.505 }, 00:21:56.505 "auth": { 00:21:56.505 "state": "completed", 00:21:56.505 "digest": "sha384", 00:21:56.505 "dhgroup": "ffdhe2048" 00:21:56.505 } 00:21:56.505 } 00:21:56.505 ]' 00:21:56.505 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.767 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.767 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.767 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:56.767 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.767 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.767 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.767 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.767 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:21:56.767 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:57.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.709 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.970 
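The loop traced above repeats one host-side sequence per digest/dhgroup/key combination: pin the initiator's DH-HMAC-CHAP parameters, then re-attach the controller with the key under test. A minimal sketch of a single iteration, assuming the host RPC socket and the key names (key0/ckey0) were registered earlier in the run, outside this excerpt:

#!/usr/bin/env bash
# Host-side RPC wrapper, as used by hostrpc in target/auth.sh@31.
rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6

# Pin the initiator to one digest/DH-group pair (target/auth.sh@121).
rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Re-attach with the key under test (target/auth.sh@60); ckey0 is the
# controller (bidirectional) key and is omitted in the key3 iterations.
rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0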
00:21:57.970 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.970 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.970 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.231 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.231 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.231 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.231 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.231 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.231 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.231 { 00:21:58.231 "cntlid": 65, 00:21:58.231 "qid": 0, 00:21:58.231 "state": "enabled", 00:21:58.231 "thread": "nvmf_tgt_poll_group_000", 00:21:58.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:58.231 "listen_address": { 00:21:58.231 "trtype": "TCP", 00:21:58.231 "adrfam": "IPv4", 00:21:58.231 "traddr": "10.0.0.2", 00:21:58.231 "trsvcid": "4420" 00:21:58.231 }, 00:21:58.231 "peer_address": { 00:21:58.231 "trtype": "TCP", 00:21:58.231 "adrfam": "IPv4", 00:21:58.231 "traddr": "10.0.0.1", 00:21:58.231 "trsvcid": "60082" 00:21:58.231 }, 00:21:58.231 "auth": { 00:21:58.231 "state": "completed", 00:21:58.231 "digest": "sha384", 00:21:58.231 "dhgroup": "ffdhe3072" 00:21:58.231 } 00:21:58.231 } 00:21:58.231 ]' 00:21:58.231 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.231 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:58.231 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.231 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:58.231 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.231 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.231 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.231 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.492 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:58.492 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:21:59.066 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.327 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:59.327 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.327 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.327 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.328 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.328 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.328 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.588 00:21:59.588 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.588 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.588 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.848 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.848 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.848 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.848 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.848 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.848 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.848 { 00:21:59.848 "cntlid": 67, 00:21:59.848 "qid": 0, 00:21:59.848 "state": "enabled", 00:21:59.848 "thread": "nvmf_tgt_poll_group_000", 00:21:59.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:21:59.848 "listen_address": { 00:21:59.848 "trtype": "TCP", 00:21:59.848 "adrfam": "IPv4", 00:21:59.848 "traddr": "10.0.0.2", 00:21:59.848 "trsvcid": "4420" 00:21:59.848 }, 00:21:59.848 "peer_address": { 00:21:59.848 "trtype": "TCP", 00:21:59.848 "adrfam": "IPv4", 00:21:59.848 "traddr": "10.0.0.1", 00:21:59.848 "trsvcid": "60108" 00:21:59.848 }, 00:21:59.848 "auth": { 00:21:59.848 "state": "completed", 00:21:59.848 "digest": "sha384", 00:21:59.848 "dhgroup": "ffdhe3072" 00:21:59.848 } 00:21:59.848 } 00:21:59.848 ]' 00:21:59.848 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.848 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.848 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.848 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:59.848 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.108 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.108 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.108 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.108 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret 
DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:00.108 06:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.048 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.049 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.049 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.049 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.309 00:22:01.309 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.309 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.309 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.569 { 00:22:01.569 "cntlid": 69, 00:22:01.569 "qid": 0, 00:22:01.569 "state": "enabled", 00:22:01.569 "thread": "nvmf_tgt_poll_group_000", 00:22:01.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:01.569 "listen_address": { 00:22:01.569 "trtype": "TCP", 00:22:01.569 "adrfam": "IPv4", 00:22:01.569 "traddr": "10.0.0.2", 00:22:01.569 "trsvcid": "4420" 00:22:01.569 }, 00:22:01.569 "peer_address": { 00:22:01.569 "trtype": "TCP", 00:22:01.569 "adrfam": "IPv4", 00:22:01.569 "traddr": "10.0.0.1", 00:22:01.569 "trsvcid": "60120" 00:22:01.569 }, 00:22:01.569 "auth": { 00:22:01.569 "state": "completed", 00:22:01.569 "digest": "sha384", 00:22:01.569 "dhgroup": "ffdhe3072" 00:22:01.569 } 00:22:01.569 } 00:22:01.569 ]' 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.569 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:01.830 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:01.830 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:02.401 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.401 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:02.401 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.401 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.401 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.401 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.401 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:02.401 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
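The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) assignment traced before each add_host/attach pair is the standard bash idiom for an optional flag: the array expands to the flag-and-value pair only when a controller key exists for that keyid, which is why the key3 iterations above pass --dhchap-key key3 alone. A self-contained illustration with hypothetical key names:

#!/usr/bin/env bash
# ${var:+word} expands to word only when var is set and non-empty,
# so an unset entry contributes zero arguments to the command line.
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # no controller key for keyid 3
for keyid in 0 3; do
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo cmd --dhchap-key "key$keyid" "${ckey[@]}"
done
# prints: cmd --dhchap-key key0 --dhchap-ctrlr-key ckey0
#         cmd --dhchap-key key3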
00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.661 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.922 00:22:02.922 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.922 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.922 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.183 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.183 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.183 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.183 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.183 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.183 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.183 { 00:22:03.183 "cntlid": 71, 00:22:03.183 "qid": 0, 00:22:03.183 "state": "enabled", 00:22:03.183 "thread": "nvmf_tgt_poll_group_000", 00:22:03.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:03.183 "listen_address": { 00:22:03.183 "trtype": "TCP", 00:22:03.183 "adrfam": "IPv4", 00:22:03.183 "traddr": "10.0.0.2", 00:22:03.183 "trsvcid": "4420" 00:22:03.183 }, 00:22:03.183 "peer_address": { 00:22:03.183 "trtype": "TCP", 00:22:03.183 "adrfam": "IPv4", 00:22:03.183 "traddr": "10.0.0.1", 00:22:03.183 "trsvcid": "60142" 00:22:03.183 }, 00:22:03.183 "auth": { 00:22:03.183 "state": "completed", 00:22:03.183 "digest": "sha384", 00:22:03.183 "dhgroup": "ffdhe3072" 00:22:03.183 } 00:22:03.183 } 00:22:03.183 ]' 00:22:03.183 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.183 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:03.183 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.183 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:03.183 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.183 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.183 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.183 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.444 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:03.444 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:04.013 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.013 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:04.013 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.013 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.013 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.013 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.013 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.013 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:04.013 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
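After each successful attach, the test runs the same three assertions on the target side, checking that the qpair actually authenticated with the expected parameters. In shell terms, roughly (rpc_cmd and the jq filters are exactly the ones echoed in the surrounding log; the here-string form is just a compact way to write it):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]

Before the qpair query, the controller name is checked the same way against bdev_nvme_get_controllers, and the controller is detached once the checks pass.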
00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.273 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.274 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.534 00:22:04.534 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.534 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.534 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.795 { 00:22:04.795 "cntlid": 73, 00:22:04.795 "qid": 0, 00:22:04.795 "state": "enabled", 00:22:04.795 "thread": "nvmf_tgt_poll_group_000", 00:22:04.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:04.795 "listen_address": { 00:22:04.795 "trtype": "TCP", 00:22:04.795 "adrfam": "IPv4", 00:22:04.795 "traddr": "10.0.0.2", 00:22:04.795 "trsvcid": "4420" 00:22:04.795 }, 00:22:04.795 "peer_address": { 00:22:04.795 "trtype": "TCP", 00:22:04.795 "adrfam": "IPv4", 00:22:04.795 "traddr": "10.0.0.1", 00:22:04.795 "trsvcid": "60312" 00:22:04.795 }, 00:22:04.795 "auth": { 00:22:04.795 "state": "completed", 00:22:04.795 "digest": "sha384", 00:22:04.795 "dhgroup": "ffdhe4096" 00:22:04.795 } 00:22:04.795 } 00:22:04.795 ]' 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.795 
06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.795 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.056 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:05.056 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:05.626 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.626 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:05.626 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.626 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.626 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.626 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.626 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:05.626 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.886 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.146 00:22:06.146 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.146 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.146 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.407 { 00:22:06.407 "cntlid": 75, 00:22:06.407 "qid": 0, 00:22:06.407 "state": "enabled", 00:22:06.407 "thread": "nvmf_tgt_poll_group_000", 00:22:06.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:06.407 "listen_address": { 00:22:06.407 "trtype": "TCP", 00:22:06.407 "adrfam": "IPv4", 00:22:06.407 "traddr": "10.0.0.2", 00:22:06.407 "trsvcid": "4420" 00:22:06.407 }, 00:22:06.407 "peer_address": { 00:22:06.407 "trtype": "TCP", 00:22:06.407 "adrfam": "IPv4", 00:22:06.407 "traddr": "10.0.0.1", 00:22:06.407 "trsvcid": "60334" 00:22:06.407 }, 00:22:06.407 "auth": { 00:22:06.407 "state": "completed", 00:22:06.407 "digest": "sha384", 00:22:06.407 "dhgroup": "ffdhe4096" 00:22:06.407 } 00:22:06.407 } 00:22:06.407 ]' 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.407 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.668 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:06.668 06:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:07.240 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.240 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:07.240 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.240 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.501 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.761 00:22:07.761 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.761 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.761 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.021 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.021 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.021 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.021 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.021 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.021 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.021 { 00:22:08.021 "cntlid": 77, 00:22:08.021 "qid": 0, 00:22:08.021 "state": "enabled", 00:22:08.021 "thread": "nvmf_tgt_poll_group_000", 00:22:08.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:08.021 "listen_address": { 00:22:08.021 "trtype": "TCP", 00:22:08.021 "adrfam": "IPv4", 00:22:08.021 "traddr": "10.0.0.2", 00:22:08.021 "trsvcid": "4420" 00:22:08.021 }, 00:22:08.021 "peer_address": { 00:22:08.021 "trtype": "TCP", 00:22:08.021 "adrfam": "IPv4", 00:22:08.021 "traddr": "10.0.0.1", 00:22:08.021 "trsvcid": "60358" 00:22:08.021 }, 00:22:08.021 "auth": { 00:22:08.021 "state": "completed", 00:22:08.021 "digest": "sha384", 00:22:08.021 "dhgroup": "ffdhe4096" 00:22:08.021 } 00:22:08.021 } 00:22:08.021 ]' 00:22:08.021 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.021 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.021 06:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.021 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:08.021 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.283 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.283 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.283 06:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.283 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:08.283 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.225 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.485 00:22:09.485 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.485 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.485 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.745 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.745 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.745 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.745 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.745 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.745 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.745 { 00:22:09.745 "cntlid": 79, 00:22:09.745 "qid": 0, 00:22:09.745 "state": "enabled", 00:22:09.745 "thread": "nvmf_tgt_poll_group_000", 00:22:09.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:09.745 "listen_address": { 00:22:09.745 "trtype": "TCP", 00:22:09.745 "adrfam": "IPv4", 00:22:09.746 "traddr": "10.0.0.2", 00:22:09.746 "trsvcid": "4420" 00:22:09.746 }, 00:22:09.746 "peer_address": { 00:22:09.746 "trtype": "TCP", 00:22:09.746 "adrfam": "IPv4", 00:22:09.746 "traddr": "10.0.0.1", 00:22:09.746 "trsvcid": "60374" 00:22:09.746 }, 00:22:09.746 "auth": { 00:22:09.746 "state": "completed", 00:22:09.746 "digest": "sha384", 00:22:09.746 "dhgroup": "ffdhe4096" 00:22:09.746 } 00:22:09.746 } 00:22:09.746 ]' 00:22:09.746 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.746 06:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:09.746 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.746 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:09.746 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.746 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.746 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.746 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.006 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:10.006 06:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:10.578 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.578 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:10.578 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.578 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.578 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.578 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.578 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.578 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:10.578 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:10.838 06:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.838 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.097 00:22:11.097 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.097 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.097 06:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.357 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.357 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.357 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.357 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.357 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.357 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.357 { 00:22:11.357 "cntlid": 81, 00:22:11.357 "qid": 0, 00:22:11.357 "state": "enabled", 00:22:11.357 "thread": "nvmf_tgt_poll_group_000", 00:22:11.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:11.357 "listen_address": { 00:22:11.357 "trtype": "TCP", 00:22:11.357 "adrfam": "IPv4", 00:22:11.357 "traddr": "10.0.0.2", 00:22:11.357 "trsvcid": "4420" 00:22:11.357 }, 00:22:11.357 "peer_address": { 00:22:11.357 "trtype": "TCP", 00:22:11.357 "adrfam": "IPv4", 00:22:11.357 "traddr": "10.0.0.1", 00:22:11.357 "trsvcid": "60404" 00:22:11.357 }, 00:22:11.357 "auth": { 00:22:11.357 "state": "completed", 00:22:11.357 "digest": 
"sha384", 00:22:11.357 "dhgroup": "ffdhe6144" 00:22:11.357 } 00:22:11.357 } 00:22:11.357 ]' 00:22:11.357 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.357 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.357 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.357 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:11.357 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.619 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.619 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.619 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.619 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:11.619 06:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.564 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.825 00:22:12.825 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.825 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.825 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.086 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.086 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.086 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.086 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.086 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.086 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.086 { 00:22:13.086 "cntlid": 83, 00:22:13.086 "qid": 0, 00:22:13.086 "state": "enabled", 00:22:13.086 "thread": "nvmf_tgt_poll_group_000", 00:22:13.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:13.086 "listen_address": { 00:22:13.086 "trtype": "TCP", 00:22:13.086 "adrfam": "IPv4", 00:22:13.086 "traddr": "10.0.0.2", 00:22:13.086 
"trsvcid": "4420" 00:22:13.086 }, 00:22:13.086 "peer_address": { 00:22:13.086 "trtype": "TCP", 00:22:13.086 "adrfam": "IPv4", 00:22:13.086 "traddr": "10.0.0.1", 00:22:13.086 "trsvcid": "60438" 00:22:13.086 }, 00:22:13.086 "auth": { 00:22:13.086 "state": "completed", 00:22:13.086 "digest": "sha384", 00:22:13.086 "dhgroup": "ffdhe6144" 00:22:13.086 } 00:22:13.086 } 00:22:13.086 ]' 00:22:13.086 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.086 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.086 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.086 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.086 06:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.347 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.347 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.347 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.347 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:13.347 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:13.919 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.180 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:14.180 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.180 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.180 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.180 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.180 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:14.180 06:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:14.180 
06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.180 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.752 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.752 { 00:22:14.752 "cntlid": 85, 00:22:14.752 "qid": 0, 00:22:14.752 "state": "enabled", 00:22:14.752 "thread": "nvmf_tgt_poll_group_000", 00:22:14.752 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:14.752 "listen_address": { 00:22:14.752 "trtype": "TCP", 00:22:14.752 "adrfam": "IPv4", 00:22:14.752 "traddr": "10.0.0.2", 00:22:14.752 "trsvcid": "4420" 00:22:14.752 }, 00:22:14.752 "peer_address": { 00:22:14.752 "trtype": "TCP", 00:22:14.752 "adrfam": "IPv4", 00:22:14.752 "traddr": "10.0.0.1", 00:22:14.752 "trsvcid": "33154" 00:22:14.752 }, 00:22:14.752 "auth": { 00:22:14.752 "state": "completed", 00:22:14.752 "digest": "sha384", 00:22:14.752 "dhgroup": "ffdhe6144" 00:22:14.752 } 00:22:14.752 } 00:22:14.752 ]' 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:14.752 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.013 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.013 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.013 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.013 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:15.013 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:15.956 06:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:15.956 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.217 00:22:16.217 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.217 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.217 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.479 { 00:22:16.479 "cntlid": 87, 
00:22:16.479 "qid": 0, 00:22:16.479 "state": "enabled", 00:22:16.479 "thread": "nvmf_tgt_poll_group_000", 00:22:16.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:16.479 "listen_address": { 00:22:16.479 "trtype": "TCP", 00:22:16.479 "adrfam": "IPv4", 00:22:16.479 "traddr": "10.0.0.2", 00:22:16.479 "trsvcid": "4420" 00:22:16.479 }, 00:22:16.479 "peer_address": { 00:22:16.479 "trtype": "TCP", 00:22:16.479 "adrfam": "IPv4", 00:22:16.479 "traddr": "10.0.0.1", 00:22:16.479 "trsvcid": "33184" 00:22:16.479 }, 00:22:16.479 "auth": { 00:22:16.479 "state": "completed", 00:22:16.479 "digest": "sha384", 00:22:16.479 "dhgroup": "ffdhe6144" 00:22:16.479 } 00:22:16.479 } 00:22:16.479 ]' 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.479 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.740 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:16.740 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:17.311 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.572 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.573 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.145 00:22:18.145 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.145 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.145 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.406 { 00:22:18.406 "cntlid": 89, 00:22:18.406 "qid": 0, 00:22:18.406 "state": "enabled", 00:22:18.406 "thread": "nvmf_tgt_poll_group_000", 00:22:18.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:18.406 "listen_address": { 00:22:18.406 "trtype": "TCP", 00:22:18.406 "adrfam": "IPv4", 00:22:18.406 "traddr": "10.0.0.2", 00:22:18.406 "trsvcid": "4420" 00:22:18.406 }, 00:22:18.406 "peer_address": { 00:22:18.406 "trtype": "TCP", 00:22:18.406 "adrfam": "IPv4", 00:22:18.406 "traddr": "10.0.0.1", 00:22:18.406 "trsvcid": "33216" 00:22:18.406 }, 00:22:18.406 "auth": { 00:22:18.406 "state": "completed", 00:22:18.406 "digest": "sha384", 00:22:18.406 "dhgroup": "ffdhe8192" 00:22:18.406 } 00:22:18.406 } 00:22:18.406 ]' 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.406 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.667 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:18.667 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:19.238 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.238 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:19.238 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.238 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.238 06:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.238 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.239 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:19.239 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:19.505 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:19.505 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.505 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:19.506 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:19.506 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:19.506 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.506 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.506 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.506 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.506 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.506 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.506 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.506 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.080 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.080 { 00:22:20.080 "cntlid": 91, 00:22:20.080 "qid": 0, 00:22:20.080 "state": "enabled", 00:22:20.080 "thread": "nvmf_tgt_poll_group_000", 00:22:20.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:20.080 "listen_address": { 00:22:20.080 "trtype": "TCP", 00:22:20.080 "adrfam": "IPv4", 00:22:20.080 "traddr": "10.0.0.2", 00:22:20.080 "trsvcid": "4420" 00:22:20.080 }, 00:22:20.080 "peer_address": { 00:22:20.080 "trtype": "TCP", 00:22:20.080 "adrfam": "IPv4", 00:22:20.080 "traddr": "10.0.0.1", 00:22:20.080 "trsvcid": "33246" 00:22:20.080 }, 00:22:20.080 "auth": { 00:22:20.080 "state": "completed", 00:22:20.080 "digest": "sha384", 00:22:20.080 "dhgroup": "ffdhe8192" 00:22:20.080 } 00:22:20.080 } 00:22:20.080 ]' 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:20.080 06:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.340 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:20.341 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.341 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.341 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.341 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.341 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:20.341 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:21.284 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.284 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:21.284 06:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.284 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.284 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.284 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.284 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:21.284 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.284 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.856 00:22:21.856 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.856 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.856 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.117 06:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.117 { 00:22:22.117 "cntlid": 93, 00:22:22.117 "qid": 0, 00:22:22.117 "state": "enabled", 00:22:22.117 "thread": "nvmf_tgt_poll_group_000", 00:22:22.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:22.117 "listen_address": { 00:22:22.117 "trtype": "TCP", 00:22:22.117 "adrfam": "IPv4", 00:22:22.117 "traddr": "10.0.0.2", 00:22:22.117 "trsvcid": "4420" 00:22:22.117 }, 00:22:22.117 "peer_address": { 00:22:22.117 "trtype": "TCP", 00:22:22.117 "adrfam": "IPv4", 00:22:22.117 "traddr": "10.0.0.1", 00:22:22.117 "trsvcid": "33272" 00:22:22.117 }, 00:22:22.117 "auth": { 00:22:22.117 "state": "completed", 00:22:22.117 "digest": "sha384", 00:22:22.117 "dhgroup": "ffdhe8192" 00:22:22.117 } 00:22:22.117 } 00:22:22.117 ]' 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.117 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.379 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:22.379 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:22.950 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.950 06:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:22.950 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.950 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.950 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.950 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.950 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:22.950 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:23.211 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:23.211 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.211 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:23.211 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:23.211 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:23.211 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.211 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:22:23.211 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.211 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.211 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.211 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:23.212 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.212 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.782 00:22:23.782 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.782 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.782 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.782 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.782 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.782 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.782 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.782 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.782 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.782 { 00:22:23.782 "cntlid": 95, 00:22:23.783 "qid": 0, 00:22:23.783 "state": "enabled", 00:22:23.783 "thread": "nvmf_tgt_poll_group_000", 00:22:23.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:23.783 "listen_address": { 00:22:23.783 "trtype": "TCP", 00:22:23.783 "adrfam": "IPv4", 00:22:23.783 "traddr": "10.0.0.2", 00:22:23.783 "trsvcid": "4420" 00:22:23.783 }, 00:22:23.783 "peer_address": { 00:22:23.783 "trtype": "TCP", 00:22:23.783 "adrfam": "IPv4", 00:22:23.783 "traddr": "10.0.0.1", 00:22:23.783 "trsvcid": "57734" 00:22:23.783 }, 00:22:23.783 "auth": { 00:22:23.783 "state": "completed", 00:22:23.783 "digest": "sha384", 00:22:23.783 "dhgroup": "ffdhe8192" 00:22:23.783 } 00:22:23.783 } 00:22:23.783 ]' 00:22:23.783 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.043 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:24.043 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.043 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:24.043 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.043 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.043 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.043 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.303 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:24.303 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:24.873 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.873 06:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:24.873 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.873 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.873 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.873 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:24.873 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.873 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.873 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:24.873 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:25.134 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:25.134 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.134 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:25.134 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:25.134 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:25.134 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.134 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.134 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.134 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.134 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.135 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.135 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.135 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.135 00:22:25.135 
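The block above is one pass of the test's digest/dhgroup sweep: pin the initiator to a single combination, register the host key on the target, and attach a controller, which is what triggers the DH-HMAC-CHAP handshake. A minimal bash sketch of that pass, reconstructed from the xtrace (the tgt_rpc/host_rpc helpers and the loop framing are assumptions standing in for target/auth.sh internals; paths, NQNs, and flags are taken from the log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
subnqn=nqn.2024-03.io.spdk:cnode0

tgt_rpc()  { "$rpc" "$@"; }                        # target-side RPCs (default socket)
host_rpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # initiator-side RPCs (host.sock)

for keyid in 0 1 2 3; do
    # Restrict the initiator to exactly one digest/dhgroup combination.
    host_rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    # key3 carries no controller (bidirectional) key in this run, mirroring
    # the ${ckeys[$3]:+...} expansion visible in the trace.
    ckey=(); [[ $keyid != 3 ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")
    tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
    # Attaching the controller is where authentication actually runs.
    host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"
done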
06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.135 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.135 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.397 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.397 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.397 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.397 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.397 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.397 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.397 { 00:22:25.397 "cntlid": 97, 00:22:25.397 "qid": 0, 00:22:25.397 "state": "enabled", 00:22:25.397 "thread": "nvmf_tgt_poll_group_000", 00:22:25.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:25.397 "listen_address": { 00:22:25.397 "trtype": "TCP", 00:22:25.397 "adrfam": "IPv4", 00:22:25.397 "traddr": "10.0.0.2", 00:22:25.397 "trsvcid": "4420" 00:22:25.397 }, 00:22:25.397 "peer_address": { 00:22:25.397 "trtype": "TCP", 00:22:25.397 "adrfam": "IPv4", 00:22:25.397 "traddr": "10.0.0.1", 00:22:25.397 "trsvcid": "57756" 00:22:25.397 }, 00:22:25.397 "auth": { 00:22:25.397 "state": "completed", 00:22:25.397 "digest": "sha512", 00:22:25.397 "dhgroup": "null" 00:22:25.397 } 00:22:25.397 } 00:22:25.397 ]' 00:22:25.397 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.397 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.397 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.659 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:25.659 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.659 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.659 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.659 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.659 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:25.659 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:26.601 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.602 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.863 00:22:26.863 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.863 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.863 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.124 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.124 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.124 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.124 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.124 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.124 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.124 { 00:22:27.124 "cntlid": 99, 00:22:27.125 "qid": 0, 00:22:27.125 "state": "enabled", 00:22:27.125 "thread": "nvmf_tgt_poll_group_000", 00:22:27.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:27.125 "listen_address": { 00:22:27.125 "trtype": "TCP", 00:22:27.125 "adrfam": "IPv4", 00:22:27.125 "traddr": "10.0.0.2", 00:22:27.125 "trsvcid": "4420" 00:22:27.125 }, 00:22:27.125 "peer_address": { 00:22:27.125 "trtype": "TCP", 00:22:27.125 "adrfam": "IPv4", 00:22:27.125 "traddr": "10.0.0.1", 00:22:27.125 "trsvcid": "57788" 00:22:27.125 }, 00:22:27.125 "auth": { 00:22:27.125 "state": "completed", 00:22:27.125 "digest": "sha512", 00:22:27.125 "dhgroup": "null" 00:22:27.125 } 00:22:27.125 } 00:22:27.125 ]' 00:22:27.125 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.125 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.125 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.125 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:27.125 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.125 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.125 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.125 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.385 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:27.386 06:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:27.956 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.956 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:27.956 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.956 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.956 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.956 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.956 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:27.957 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
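Each attach is then checked the same way: the target's qpair listing reports how the new connection authenticated, and the trace's [[ ... ]] tests assert on it before tearing down. A sketch of that verification step, using the same jq filters as the log (helper names as in the previous sketch; the digest/dhgroup values are whatever the current pass configured):

# The qpair list records the negotiated auth parameters for the connection.
qpairs=$(tgt_rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
# Confirm the controller actually came up, then detach for the next pass.
[[ $(host_rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
host_rpc bdev_nvme_detach_controller nvme0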
00:22:28.217 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.478 00:22:28.478 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.478 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.478 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.739 { 00:22:28.739 "cntlid": 101, 00:22:28.739 "qid": 0, 00:22:28.739 "state": "enabled", 00:22:28.739 "thread": "nvmf_tgt_poll_group_000", 00:22:28.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:28.739 "listen_address": { 00:22:28.739 "trtype": "TCP", 00:22:28.739 "adrfam": "IPv4", 00:22:28.739 "traddr": "10.0.0.2", 00:22:28.739 "trsvcid": "4420" 00:22:28.739 }, 00:22:28.739 "peer_address": { 00:22:28.739 "trtype": "TCP", 00:22:28.739 "adrfam": "IPv4", 00:22:28.739 "traddr": "10.0.0.1", 00:22:28.739 "trsvcid": "57802" 00:22:28.739 }, 00:22:28.739 "auth": { 00:22:28.739 "state": "completed", 00:22:28.739 "digest": "sha512", 00:22:28.739 "dhgroup": "null" 00:22:28.739 } 00:22:28.739 } 00:22:28.739 ]' 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.739 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.001 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:29.001 06:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:29.570 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.570 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:29.570 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.570 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.570 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.570 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.570 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:29.570 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:29.830 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:29.830 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.830 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:29.830 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:29.830 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:29.830 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.830 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:22:29.830 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.831 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.831 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.831 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:29.831 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:29.831 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:30.091 00:22:30.091 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.091 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.091 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.352 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.352 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.352 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.352 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.352 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.352 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.352 { 00:22:30.352 "cntlid": 103, 00:22:30.352 "qid": 0, 00:22:30.352 "state": "enabled", 00:22:30.352 "thread": "nvmf_tgt_poll_group_000", 00:22:30.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:30.352 "listen_address": { 00:22:30.352 "trtype": "TCP", 00:22:30.352 "adrfam": "IPv4", 00:22:30.352 "traddr": "10.0.0.2", 00:22:30.352 "trsvcid": "4420" 00:22:30.352 }, 00:22:30.352 "peer_address": { 00:22:30.353 "trtype": "TCP", 00:22:30.353 "adrfam": "IPv4", 00:22:30.353 "traddr": "10.0.0.1", 00:22:30.353 "trsvcid": "57828" 00:22:30.353 }, 00:22:30.353 "auth": { 00:22:30.353 "state": "completed", 00:22:30.353 "digest": "sha512", 00:22:30.353 "dhgroup": "null" 00:22:30.353 } 00:22:30.353 } 00:22:30.353 ]' 00:22:30.353 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.353 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.353 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.353 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:30.353 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.353 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.353 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.353 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.615 06:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:30.615 06:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:31.186 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.186 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:31.186 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.186 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.186 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:31.186 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.186 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:31.186 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
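The second leg of every pass exercises the kernel initiator: nvme-cli connects with the DHHC-1 secrets, disconnects, and the host registration is removed so the next combination starts clean. A sketch of that leg with the secrets elided (the <...> placeholders are not the values from the log; flags are as traced):

# Kernel-initiator authentication with the same key material (secrets elided).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "${hostnqn##*uuid:}" -l 0 \
    --dhchap-secret 'DHHC-1:00:<host key>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'
nvme disconnect -n "$subnqn"
tgt_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"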
00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.448 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.737 00:22:31.737 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.737 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.737 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.737 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.737 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.737 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.737 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.737 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.737 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.737 { 00:22:31.737 "cntlid": 105, 00:22:31.737 "qid": 0, 00:22:31.737 "state": "enabled", 00:22:31.737 "thread": "nvmf_tgt_poll_group_000", 00:22:31.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:31.737 "listen_address": { 00:22:31.737 "trtype": "TCP", 00:22:31.737 "adrfam": "IPv4", 00:22:31.737 "traddr": "10.0.0.2", 00:22:31.737 "trsvcid": "4420" 00:22:31.737 }, 00:22:31.737 "peer_address": { 00:22:31.737 "trtype": "TCP", 00:22:31.737 "adrfam": "IPv4", 00:22:31.737 "traddr": "10.0.0.1", 00:22:31.737 "trsvcid": "57856" 00:22:31.737 }, 00:22:31.737 "auth": { 00:22:31.737 "state": "completed", 00:22:31.737 "digest": "sha512", 00:22:31.737 "dhgroup": "ffdhe2048" 00:22:31.737 } 00:22:31.737 } 00:22:31.737 ]' 00:22:31.999 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.999 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.999 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.999 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:31.999 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.999 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.999 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.999 06:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.260 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:32.260 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:32.829 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.830 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:32.830 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.830 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.830 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.830 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.830 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:32.830 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.090 06:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.350 00:22:33.350 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.350 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.350 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.350 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.350 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.350 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.350 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.609 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.609 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.609 { 00:22:33.609 "cntlid": 107, 00:22:33.609 "qid": 0, 00:22:33.609 "state": "enabled", 00:22:33.609 "thread": "nvmf_tgt_poll_group_000", 00:22:33.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:33.609 "listen_address": { 00:22:33.609 "trtype": "TCP", 00:22:33.609 "adrfam": "IPv4", 00:22:33.609 "traddr": "10.0.0.2", 00:22:33.609 "trsvcid": "4420" 00:22:33.609 }, 00:22:33.609 "peer_address": { 00:22:33.609 "trtype": "TCP", 00:22:33.609 "adrfam": "IPv4", 00:22:33.609 "traddr": "10.0.0.1", 00:22:33.609 "trsvcid": "60364" 00:22:33.609 }, 00:22:33.609 "auth": { 00:22:33.609 "state": "completed", 00:22:33.609 "digest": "sha512", 00:22:33.609 "dhgroup": "ffdhe2048" 00:22:33.609 } 00:22:33.609 } 00:22:33.609 ]' 00:22:33.609 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.609 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.609 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.609 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:33.609 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:22:33.609 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.609 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.609 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.869 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:33.869 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:34.443 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.443 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:34.443 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.443 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.443 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.443 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.443 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:34.443 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:34.704 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:34.704 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.704 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.704 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:34.704 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:34.704 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.704 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
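[Editor's sketch] Each iteration is then verified by querying the target's qpairs and asserting the negotiated auth fields, exactly as target/auth.sh@73-77 does above. Condensed, with the same jq filters as the log:

    # controller must exist on the host side
    [[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # the qpair on the target must report the negotiated digest/dhgroup and a
    # completed authentication state
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]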
00:22:34.704 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.704 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.704 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.705 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.705 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.705 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.966 00:22:34.966 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.966 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.966 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.966 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.966 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.966 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.966 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.227 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.227 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.227 { 00:22:35.227 "cntlid": 109, 00:22:35.227 "qid": 0, 00:22:35.227 "state": "enabled", 00:22:35.227 "thread": "nvmf_tgt_poll_group_000", 00:22:35.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:35.227 "listen_address": { 00:22:35.227 "trtype": "TCP", 00:22:35.227 "adrfam": "IPv4", 00:22:35.227 "traddr": "10.0.0.2", 00:22:35.227 "trsvcid": "4420" 00:22:35.227 }, 00:22:35.227 "peer_address": { 00:22:35.227 "trtype": "TCP", 00:22:35.227 "adrfam": "IPv4", 00:22:35.227 "traddr": "10.0.0.1", 00:22:35.227 "trsvcid": "60400" 00:22:35.227 }, 00:22:35.227 "auth": { 00:22:35.227 "state": "completed", 00:22:35.227 "digest": "sha512", 00:22:35.227 "dhgroup": "ffdhe2048" 00:22:35.227 } 00:22:35.227 } 00:22:35.227 ]' 00:22:35.227 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.227 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.227 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.227 06:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:35.227 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.227 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.227 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.227 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.489 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:35.489 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:36.061 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.061 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:36.061 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.061 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.061 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.061 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.061 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:36.061 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:36.321 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:36.321 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.321 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.321 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:36.321 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:36.321 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.321 06:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:22:36.321 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.322 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.322 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.322 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:36.322 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.322 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.582 00:22:36.582 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.582 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.582 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.582 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.582 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.582 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.582 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.843 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.843 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.843 { 00:22:36.843 "cntlid": 111, 00:22:36.843 "qid": 0, 00:22:36.843 "state": "enabled", 00:22:36.843 "thread": "nvmf_tgt_poll_group_000", 00:22:36.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:36.843 "listen_address": { 00:22:36.843 "trtype": "TCP", 00:22:36.843 "adrfam": "IPv4", 00:22:36.843 "traddr": "10.0.0.2", 00:22:36.844 "trsvcid": "4420" 00:22:36.844 }, 00:22:36.844 "peer_address": { 00:22:36.844 "trtype": "TCP", 00:22:36.844 "adrfam": "IPv4", 00:22:36.844 "traddr": "10.0.0.1", 00:22:36.844 "trsvcid": "60426" 00:22:36.844 }, 00:22:36.844 "auth": { 00:22:36.844 "state": "completed", 00:22:36.844 "digest": "sha512", 00:22:36.844 "dhgroup": "ffdhe2048" 00:22:36.844 } 00:22:36.844 } 00:22:36.844 ]' 00:22:36.844 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.844 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.844 
06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.844 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:36.844 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.844 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.844 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.844 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.104 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:37.104 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:37.676 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.676 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:37.676 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.676 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.676 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.676 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.676 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.676 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:37.676 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.937 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.198 00:22:38.198 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.198 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.198 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.198 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.198 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.198 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.458 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.458 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.458 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.458 { 00:22:38.458 "cntlid": 113, 00:22:38.458 "qid": 0, 00:22:38.458 "state": "enabled", 00:22:38.458 "thread": "nvmf_tgt_poll_group_000", 00:22:38.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:38.458 "listen_address": { 00:22:38.458 "trtype": "TCP", 00:22:38.458 "adrfam": "IPv4", 00:22:38.458 "traddr": "10.0.0.2", 00:22:38.458 "trsvcid": "4420" 00:22:38.458 }, 00:22:38.458 "peer_address": { 00:22:38.458 "trtype": "TCP", 00:22:38.458 "adrfam": "IPv4", 00:22:38.458 "traddr": "10.0.0.1", 00:22:38.458 "trsvcid": "60460" 00:22:38.458 }, 00:22:38.458 "auth": { 00:22:38.458 "state": "completed", 00:22:38.458 "digest": "sha512", 00:22:38.458 "dhgroup": "ffdhe3072" 00:22:38.458 } 00:22:38.458 } 00:22:38.458 ]' 00:22:38.458 06:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.458 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.458 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.458 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:38.458 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.458 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.458 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.458 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.718 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:38.718 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:39.289 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.289 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:39.289 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.289 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.289 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.289 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.289 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:39.289 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.550 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.810 00:22:39.810 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.810 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.810 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.810 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.810 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.810 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.810 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.070 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.070 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.070 { 00:22:40.070 "cntlid": 115, 00:22:40.070 "qid": 0, 00:22:40.070 "state": "enabled", 00:22:40.070 "thread": "nvmf_tgt_poll_group_000", 00:22:40.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:40.070 "listen_address": { 00:22:40.070 "trtype": "TCP", 00:22:40.070 "adrfam": "IPv4", 00:22:40.070 "traddr": "10.0.0.2", 00:22:40.070 "trsvcid": "4420" 00:22:40.070 }, 00:22:40.070 "peer_address": { 00:22:40.070 "trtype": "TCP", 00:22:40.070 "adrfam": "IPv4", 
00:22:40.070 "traddr": "10.0.0.1", 00:22:40.070 "trsvcid": "60480" 00:22:40.070 }, 00:22:40.070 "auth": { 00:22:40.070 "state": "completed", 00:22:40.070 "digest": "sha512", 00:22:40.070 "dhgroup": "ffdhe3072" 00:22:40.070 } 00:22:40.070 } 00:22:40.070 ]' 00:22:40.070 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.070 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.070 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.070 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:40.070 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.070 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.070 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.070 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.331 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:40.331 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:40.903 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.903 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:40.903 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.903 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.903 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.903 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.903 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:40.903 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.164 06:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.425 00:22:41.425 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.425 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.425 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.425 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.425 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.425 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.425 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.686 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.686 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.686 { 00:22:41.686 "cntlid": 117, 00:22:41.686 "qid": 0, 00:22:41.686 "state": "enabled", 00:22:41.686 "thread": "nvmf_tgt_poll_group_000", 00:22:41.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:41.686 "listen_address": { 00:22:41.686 "trtype": "TCP", 
00:22:41.686 "adrfam": "IPv4", 00:22:41.686 "traddr": "10.0.0.2", 00:22:41.686 "trsvcid": "4420" 00:22:41.686 }, 00:22:41.686 "peer_address": { 00:22:41.686 "trtype": "TCP", 00:22:41.686 "adrfam": "IPv4", 00:22:41.686 "traddr": "10.0.0.1", 00:22:41.686 "trsvcid": "60494" 00:22:41.686 }, 00:22:41.686 "auth": { 00:22:41.686 "state": "completed", 00:22:41.686 "digest": "sha512", 00:22:41.686 "dhgroup": "ffdhe3072" 00:22:41.686 } 00:22:41.686 } 00:22:41.686 ]' 00:22:41.686 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.686 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.686 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.686 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:41.686 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.686 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.686 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.686 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.947 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:41.947 06:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:42.519 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.519 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:42.519 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.519 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.519 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.519 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.519 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.519 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.780 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:43.040 00:22:43.040 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.040 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.040 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.040 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.040 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.040 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.040 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.040 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.040 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.040 { 00:22:43.040 "cntlid": 119, 00:22:43.040 "qid": 0, 00:22:43.040 "state": "enabled", 00:22:43.040 "thread": "nvmf_tgt_poll_group_000", 00:22:43.040 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:43.040 "listen_address": { 00:22:43.040 "trtype": "TCP", 00:22:43.040 "adrfam": "IPv4", 00:22:43.040 "traddr": "10.0.0.2", 00:22:43.040 "trsvcid": "4420" 00:22:43.040 }, 00:22:43.040 "peer_address": { 00:22:43.040 "trtype": "TCP", 00:22:43.040 "adrfam": "IPv4", 00:22:43.040 "traddr": "10.0.0.1", 00:22:43.040 "trsvcid": "60510" 00:22:43.040 }, 00:22:43.040 "auth": { 00:22:43.040 "state": "completed", 00:22:43.040 "digest": "sha512", 00:22:43.040 "dhgroup": "ffdhe3072" 00:22:43.040 } 00:22:43.040 } 00:22:43.040 ]' 00:22:43.040 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.301 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.301 06:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.301 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:43.301 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.301 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.301 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.301 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.562 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:43.562 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:44.132 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.132 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:44.132 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.132 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.133 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.133 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.133 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.133 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.133 06:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.393 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.654 00:22:44.654 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.654 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.654 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.654 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.654 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.654 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.654 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.654 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.654 06:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.654 { 00:22:44.654 "cntlid": 121, 00:22:44.654 "qid": 0, 00:22:44.654 "state": "enabled", 00:22:44.654 "thread": "nvmf_tgt_poll_group_000", 00:22:44.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:44.654 "listen_address": { 00:22:44.654 "trtype": "TCP", 00:22:44.654 "adrfam": "IPv4", 00:22:44.654 "traddr": "10.0.0.2", 00:22:44.654 "trsvcid": "4420" 00:22:44.654 }, 00:22:44.654 "peer_address": { 00:22:44.654 "trtype": "TCP", 00:22:44.654 "adrfam": "IPv4", 00:22:44.654 "traddr": "10.0.0.1", 00:22:44.654 "trsvcid": "57210" 00:22:44.654 }, 00:22:44.654 "auth": { 00:22:44.654 "state": "completed", 00:22:44.654 "digest": "sha512", 00:22:44.654 "dhgroup": "ffdhe4096" 00:22:44.654 } 00:22:44.654 } 00:22:44.654 ]' 00:22:44.654 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.914 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.914 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.914 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:44.914 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.914 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.914 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.914 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.175 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:45.175 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:45.745 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.745 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:45.745 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.745 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.745 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:45.745 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.745 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:45.745 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.005 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.266 00:22:46.266 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.266 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.266 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.526 { 00:22:46.526 "cntlid": 123, 00:22:46.526 "qid": 0, 00:22:46.526 "state": "enabled", 00:22:46.526 "thread": "nvmf_tgt_poll_group_000", 00:22:46.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:46.526 "listen_address": { 00:22:46.526 "trtype": "TCP", 00:22:46.526 "adrfam": "IPv4", 00:22:46.526 "traddr": "10.0.0.2", 00:22:46.526 "trsvcid": "4420" 00:22:46.526 }, 00:22:46.526 "peer_address": { 00:22:46.526 "trtype": "TCP", 00:22:46.526 "adrfam": "IPv4", 00:22:46.526 "traddr": "10.0.0.1", 00:22:46.526 "trsvcid": "57240" 00:22:46.526 }, 00:22:46.526 "auth": { 00:22:46.526 "state": "completed", 00:22:46.526 "digest": "sha512", 00:22:46.526 "dhgroup": "ffdhe4096" 00:22:46.526 } 00:22:46.526 } 00:22:46.526 ]' 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.526 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.786 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:46.786 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:47.356 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.356 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:47.356 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.356 06:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.356 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.356 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.356 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:47.357 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:47.633 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:47.633 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.633 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:47.633 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:47.633 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:47.633 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.633 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.634 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.634 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.634 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.634 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.634 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.634 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.904 00:22:47.904 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.904 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.904 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.165 06:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.165 { 00:22:48.165 "cntlid": 125, 00:22:48.165 "qid": 0, 00:22:48.165 "state": "enabled", 00:22:48.165 "thread": "nvmf_tgt_poll_group_000", 00:22:48.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:48.165 "listen_address": { 00:22:48.165 "trtype": "TCP", 00:22:48.165 "adrfam": "IPv4", 00:22:48.165 "traddr": "10.0.0.2", 00:22:48.165 "trsvcid": "4420" 00:22:48.165 }, 00:22:48.165 "peer_address": { 00:22:48.165 "trtype": "TCP", 00:22:48.165 "adrfam": "IPv4", 00:22:48.165 "traddr": "10.0.0.1", 00:22:48.165 "trsvcid": "57258" 00:22:48.165 }, 00:22:48.165 "auth": { 00:22:48.165 "state": "completed", 00:22:48.165 "digest": "sha512", 00:22:48.165 "dhgroup": "ffdhe4096" 00:22:48.165 } 00:22:48.165 } 00:22:48.165 ]' 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.165 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.426 06:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:48.426 06:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:48.997 06:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.997 06:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:48.997 06:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.997 06:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.997 06:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.997 06:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.997 06:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:48.997 06:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.259 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.520 00:22:49.520 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.520 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.520 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.780 06:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.780 { 00:22:49.780 "cntlid": 127, 00:22:49.780 "qid": 0, 00:22:49.780 "state": "enabled", 00:22:49.780 "thread": "nvmf_tgt_poll_group_000", 00:22:49.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:49.780 "listen_address": { 00:22:49.780 "trtype": "TCP", 00:22:49.780 "adrfam": "IPv4", 00:22:49.780 "traddr": "10.0.0.2", 00:22:49.780 "trsvcid": "4420" 00:22:49.780 }, 00:22:49.780 "peer_address": { 00:22:49.780 "trtype": "TCP", 00:22:49.780 "adrfam": "IPv4", 00:22:49.780 "traddr": "10.0.0.1", 00:22:49.780 "trsvcid": "57290" 00:22:49.780 }, 00:22:49.780 "auth": { 00:22:49.780 "state": "completed", 00:22:49.780 "digest": "sha512", 00:22:49.780 "dhgroup": "ffdhe4096" 00:22:49.780 } 00:22:49.780 } 00:22:49.780 ]' 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.780 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.041 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:50.041 06:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:50.613 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.613 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:50.613 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.613 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.613 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.613 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:50.613 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.613 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:50.613 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.880 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.881 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.169 00:22:51.169 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.169 06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.169 
06:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.459 { 00:22:51.459 "cntlid": 129, 00:22:51.459 "qid": 0, 00:22:51.459 "state": "enabled", 00:22:51.459 "thread": "nvmf_tgt_poll_group_000", 00:22:51.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:51.459 "listen_address": { 00:22:51.459 "trtype": "TCP", 00:22:51.459 "adrfam": "IPv4", 00:22:51.459 "traddr": "10.0.0.2", 00:22:51.459 "trsvcid": "4420" 00:22:51.459 }, 00:22:51.459 "peer_address": { 00:22:51.459 "trtype": "TCP", 00:22:51.459 "adrfam": "IPv4", 00:22:51.459 "traddr": "10.0.0.1", 00:22:51.459 "trsvcid": "57332" 00:22:51.459 }, 00:22:51.459 "auth": { 00:22:51.459 "state": "completed", 00:22:51.459 "digest": "sha512", 00:22:51.459 "dhgroup": "ffdhe6144" 00:22:51.459 } 00:22:51.459 } 00:22:51.459 ]' 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.459 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.767 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:51.767 06:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret 
DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:52.390 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.390 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:52.390 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.390 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.390 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.391 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.391 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.391 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.652 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.913 00:22:52.913 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.913 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.913 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.175 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.175 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.175 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.175 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.175 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.175 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.175 { 00:22:53.175 "cntlid": 131, 00:22:53.175 "qid": 0, 00:22:53.175 "state": "enabled", 00:22:53.175 "thread": "nvmf_tgt_poll_group_000", 00:22:53.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:53.175 "listen_address": { 00:22:53.175 "trtype": "TCP", 00:22:53.175 "adrfam": "IPv4", 00:22:53.175 "traddr": "10.0.0.2", 00:22:53.175 "trsvcid": "4420" 00:22:53.175 }, 00:22:53.175 "peer_address": { 00:22:53.175 "trtype": "TCP", 00:22:53.175 "adrfam": "IPv4", 00:22:53.175 "traddr": "10.0.0.1", 00:22:53.175 "trsvcid": "57366" 00:22:53.175 }, 00:22:53.175 "auth": { 00:22:53.175 "state": "completed", 00:22:53.175 "digest": "sha512", 00:22:53.175 "dhgroup": "ffdhe6144" 00:22:53.175 } 00:22:53.175 } 00:22:53.175 ]' 00:22:53.175 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.175 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.175 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.175 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:53.175 06:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.175 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.175 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.175 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.436 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:53.436 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:22:54.007 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.007 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:54.007 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.007 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.007 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.007 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.007 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:54.007 06:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.268 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.528 00:22:54.528 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:54.528 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:54.528 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.789 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.789 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.789 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.789 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.789 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.789 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.789 { 00:22:54.789 "cntlid": 133, 00:22:54.789 "qid": 0, 00:22:54.789 "state": "enabled", 00:22:54.789 "thread": "nvmf_tgt_poll_group_000", 00:22:54.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:54.789 "listen_address": { 00:22:54.789 "trtype": "TCP", 00:22:54.789 "adrfam": "IPv4", 00:22:54.789 "traddr": "10.0.0.2", 00:22:54.789 "trsvcid": "4420" 00:22:54.789 }, 00:22:54.789 "peer_address": { 00:22:54.789 "trtype": "TCP", 00:22:54.789 "adrfam": "IPv4", 00:22:54.789 "traddr": "10.0.0.1", 00:22:54.789 "trsvcid": "58274" 00:22:54.789 }, 00:22:54.789 "auth": { 00:22:54.789 "state": "completed", 00:22:54.789 "digest": "sha512", 00:22:54.789 "dhgroup": "ffdhe6144" 00:22:54.789 } 00:22:54.789 } 00:22:54.789 ]' 00:22:54.789 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.789 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.789 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.789 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:54.789 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret 
DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:55.049 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:55.998 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:56.259 00:22:56.259 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.259 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.259 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.519 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.519 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.519 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.519 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.519 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.520 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.520 { 00:22:56.520 "cntlid": 135, 00:22:56.520 "qid": 0, 00:22:56.520 "state": "enabled", 00:22:56.520 "thread": "nvmf_tgt_poll_group_000", 00:22:56.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:56.520 "listen_address": { 00:22:56.520 "trtype": "TCP", 00:22:56.520 "adrfam": "IPv4", 00:22:56.520 "traddr": "10.0.0.2", 00:22:56.520 "trsvcid": "4420" 00:22:56.520 }, 00:22:56.520 "peer_address": { 00:22:56.520 "trtype": "TCP", 00:22:56.520 "adrfam": "IPv4", 00:22:56.520 "traddr": "10.0.0.1", 00:22:56.520 "trsvcid": "58292" 00:22:56.520 }, 00:22:56.520 "auth": { 00:22:56.520 "state": "completed", 00:22:56.520 "digest": "sha512", 00:22:56.520 "dhgroup": "ffdhe6144" 00:22:56.520 } 00:22:56.520 } 00:22:56.520 ]' 00:22:56.520 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.520 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.520 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.520 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:56.520 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.780 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.780 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.780 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.780 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:56.780 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:57.720 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.291 00:22:58.291 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.291 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.291 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.291 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.291 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.291 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.291 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.291 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.291 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.291 { 00:22:58.291 "cntlid": 137, 00:22:58.291 "qid": 0, 00:22:58.291 "state": "enabled", 00:22:58.291 "thread": "nvmf_tgt_poll_group_000", 00:22:58.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:22:58.291 "listen_address": { 00:22:58.291 "trtype": "TCP", 00:22:58.291 "adrfam": "IPv4", 00:22:58.291 "traddr": "10.0.0.2", 00:22:58.291 "trsvcid": "4420" 00:22:58.291 }, 00:22:58.291 "peer_address": { 00:22:58.291 "trtype": "TCP", 00:22:58.291 "adrfam": "IPv4", 00:22:58.291 "traddr": "10.0.0.1", 00:22:58.291 "trsvcid": "58320" 00:22:58.291 }, 00:22:58.291 "auth": { 00:22:58.291 "state": "completed", 00:22:58.291 "digest": "sha512", 00:22:58.291 "dhgroup": "ffdhe8192" 00:22:58.291 } 00:22:58.291 } 00:22:58.292 ]' 00:22:58.292 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:58.552 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.552 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:58.552 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:58.552 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.552 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.552 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.552 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.813 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:58.813 06:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:22:59.383 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.383 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:59.383 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.383 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.383 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.383 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.383 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:59.383 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:59.643 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:59.643 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.643 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:59.644 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:59.644 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:59.644 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.644 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.644 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.644 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.644 06:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.644 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.644 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.644 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.905 00:23:00.167 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.167 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.167 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.167 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.167 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.167 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.167 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.167 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.167 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.167 { 00:23:00.167 "cntlid": 139, 00:23:00.167 "qid": 0, 00:23:00.167 "state": "enabled", 00:23:00.167 "thread": "nvmf_tgt_poll_group_000", 00:23:00.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:00.167 "listen_address": { 00:23:00.167 "trtype": "TCP", 00:23:00.167 "adrfam": "IPv4", 00:23:00.167 "traddr": "10.0.0.2", 00:23:00.167 "trsvcid": "4420" 00:23:00.167 }, 00:23:00.167 "peer_address": { 00:23:00.167 "trtype": "TCP", 00:23:00.167 "adrfam": "IPv4", 00:23:00.167 "traddr": "10.0.0.1", 00:23:00.167 "trsvcid": "58352" 00:23:00.167 }, 00:23:00.167 "auth": { 00:23:00.167 "state": "completed", 00:23:00.167 "digest": "sha512", 00:23:00.167 "dhgroup": "ffdhe8192" 00:23:00.167 } 00:23:00.167 } 00:23:00.167 ]' 00:23:00.167 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.167 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.167 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.428 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:00.428 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.428 06:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.428 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.428 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.688 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:23:00.688 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: --dhchap-ctrl-secret DHHC-1:02:OTY2ZTY2YTBkMGIzZWU5MmZlOTk5ZjQ3NDZjOWFiNzllOGUwMDE4YWVjZTM4ZTYyH5Hy1g==: 00:23:01.258 06:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.258 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:01.258 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.258 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.258 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.258 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:01.258 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:01.258 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.519 06:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.519 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.779 00:23:01.779 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.779 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.779 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.040 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.040 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.040 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.040 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.040 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.040 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.040 { 00:23:02.040 "cntlid": 141, 00:23:02.040 "qid": 0, 00:23:02.040 "state": "enabled", 00:23:02.040 "thread": "nvmf_tgt_poll_group_000", 00:23:02.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:02.040 "listen_address": { 00:23:02.040 "trtype": "TCP", 00:23:02.040 "adrfam": "IPv4", 00:23:02.040 "traddr": "10.0.0.2", 00:23:02.040 "trsvcid": "4420" 00:23:02.040 }, 00:23:02.040 "peer_address": { 00:23:02.040 "trtype": "TCP", 00:23:02.040 "adrfam": "IPv4", 00:23:02.040 "traddr": "10.0.0.1", 00:23:02.040 "trsvcid": "58380" 00:23:02.040 }, 00:23:02.040 "auth": { 00:23:02.040 "state": "completed", 00:23:02.040 "digest": "sha512", 00:23:02.040 "dhgroup": "ffdhe8192" 00:23:02.040 } 00:23:02.040 } 00:23:02.040 ]' 00:23:02.040 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.040 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.040 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.301 06:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:02.301 06:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.301 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.301 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.301 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.301 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:23:02.301 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:01:YmJhODg2NmU2YmRkODcwZjc0YWFiMWI5M2EwNjkyOTNbph2f: 00:23:03.241 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.241 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:03.241 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.241 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.241 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.241 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.241 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.241 06:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.241 06:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:03.241 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:03.811 00:23:03.811 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:03.811 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.811 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.811 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.811 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.811 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.811 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.811 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.811 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.811 { 00:23:03.811 "cntlid": 143, 00:23:03.811 "qid": 0, 00:23:03.811 "state": "enabled", 00:23:03.811 "thread": "nvmf_tgt_poll_group_000", 00:23:03.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:03.811 "listen_address": { 00:23:03.811 "trtype": "TCP", 00:23:03.811 "adrfam": "IPv4", 00:23:03.811 "traddr": "10.0.0.2", 00:23:03.811 "trsvcid": "4420" 00:23:03.811 }, 00:23:03.811 "peer_address": { 00:23:03.811 "trtype": "TCP", 00:23:03.811 "adrfam": "IPv4", 00:23:03.811 "traddr": "10.0.0.1", 00:23:03.811 "trsvcid": "53234" 00:23:03.811 }, 00:23:03.811 "auth": { 00:23:03.811 "state": "completed", 00:23:03.811 "digest": "sha512", 00:23:03.811 "dhgroup": "ffdhe8192" 00:23:03.811 } 00:23:03.811 } 00:23:03.811 ]' 00:23:03.811 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.071 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.071 
06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.071 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:04.071 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.071 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.071 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.071 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.071 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:23:04.331 06:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:23:04.901 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.901 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:04.901 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.901 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.901 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.901 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:04.901 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:04.901 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:04.901 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:04.901 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:04.901 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.162 06:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.162 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.423 00:23:05.423 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.423 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.423 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.684 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.684 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.684 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.684 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.684 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.684 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.684 { 00:23:05.684 "cntlid": 145, 00:23:05.684 "qid": 0, 00:23:05.684 "state": "enabled", 00:23:05.684 "thread": "nvmf_tgt_poll_group_000", 00:23:05.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:05.684 "listen_address": { 00:23:05.684 "trtype": "TCP", 00:23:05.684 "adrfam": "IPv4", 00:23:05.684 "traddr": "10.0.0.2", 00:23:05.684 "trsvcid": "4420" 00:23:05.684 }, 00:23:05.684 "peer_address": { 00:23:05.684 
"trtype": "TCP", 00:23:05.684 "adrfam": "IPv4", 00:23:05.684 "traddr": "10.0.0.1", 00:23:05.684 "trsvcid": "53264" 00:23:05.684 }, 00:23:05.684 "auth": { 00:23:05.684 "state": "completed", 00:23:05.684 "digest": "sha512", 00:23:05.684 "dhgroup": "ffdhe8192" 00:23:05.684 } 00:23:05.684 } 00:23:05.684 ]' 00:23:05.684 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.684 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.684 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.684 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:05.684 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.945 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.945 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.945 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.945 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:23:05.945 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:YWIxNDY1NzM0NjY4NzFmNWFkMjFhMjczOGIxZWE5MTY1Y2Y4OThjZWJkNjQwYzRk+iQy5A==: --dhchap-ctrl-secret DHHC-1:03:ZmM4NmQzODYwODgyMjY3MzEzZGQyZjI0MGE3M2EyMmMzNWMxMzZmYzE1MTU3ZTBlNTAyY2FhZTVlZWY2ZjExNlZK9/A=: 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:06.887 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:07.148 request: 00:23:07.148 { 00:23:07.148 "name": "nvme0", 00:23:07.148 "trtype": "tcp", 00:23:07.148 "traddr": "10.0.0.2", 00:23:07.148 "adrfam": "ipv4", 00:23:07.148 "trsvcid": "4420", 00:23:07.148 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:07.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:07.148 "prchk_reftag": false, 00:23:07.148 "prchk_guard": false, 00:23:07.148 "hdgst": false, 00:23:07.148 "ddgst": false, 00:23:07.148 "dhchap_key": "key2", 00:23:07.148 "allow_unrecognized_csi": false, 00:23:07.148 "method": "bdev_nvme_attach_controller", 00:23:07.148 "req_id": 1 00:23:07.148 } 00:23:07.148 Got JSON-RPC error response 00:23:07.148 response: 00:23:07.148 { 00:23:07.148 "code": -5, 00:23:07.148 "message": "Input/output error" 00:23:07.148 } 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.148 06:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.148 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.718 request: 00:23:07.718 { 00:23:07.718 "name": "nvme0", 00:23:07.718 "trtype": "tcp", 00:23:07.718 "traddr": "10.0.0.2", 00:23:07.718 "adrfam": "ipv4", 00:23:07.718 "trsvcid": "4420", 00:23:07.718 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:07.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:07.718 "prchk_reftag": false, 00:23:07.718 "prchk_guard": false, 00:23:07.718 "hdgst": false, 00:23:07.718 "ddgst": false, 00:23:07.718 "dhchap_key": "key1", 00:23:07.718 "dhchap_ctrlr_key": "ckey2", 00:23:07.718 "allow_unrecognized_csi": false, 00:23:07.718 "method": "bdev_nvme_attach_controller", 00:23:07.718 "req_id": 1 00:23:07.718 } 00:23:07.719 Got JSON-RPC error response 00:23:07.719 response: 00:23:07.719 { 00:23:07.719 "code": -5, 00:23:07.719 "message": "Input/output error" 00:23:07.719 } 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:07.719 06:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.719 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.979 request: 00:23:07.979 { 00:23:07.979 "name": "nvme0", 00:23:07.979 "trtype": "tcp", 00:23:07.979 "traddr": "10.0.0.2", 00:23:07.979 "adrfam": "ipv4", 00:23:07.979 "trsvcid": "4420", 00:23:07.979 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:07.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:07.979 "prchk_reftag": false, 00:23:07.979 "prchk_guard": false, 00:23:07.979 "hdgst": false, 00:23:07.979 "ddgst": false, 00:23:07.979 "dhchap_key": "key1", 00:23:07.979 "dhchap_ctrlr_key": "ckey1", 00:23:07.979 "allow_unrecognized_csi": false, 00:23:07.979 "method": "bdev_nvme_attach_controller", 00:23:07.979 "req_id": 1 00:23:07.979 } 00:23:07.979 Got JSON-RPC error response 00:23:07.979 response: 00:23:07.979 { 00:23:07.979 "code": -5, 00:23:07.979 "message": "Input/output error" 00:23:07.979 } 00:23:07.979 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:07.979 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:07.979 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:07.979 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:07.979 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:07.979 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.979 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2686084 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2686084 ']' 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2686084 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2686084 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2686084' 00:23:08.240 killing process with pid 2686084 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2686084 00:23:08.240 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2686084 00:23:08.240 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:08.240 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:08.240 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:08.240 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:08.241 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2712092 00:23:08.241 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2712092 00:23:08.241 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:08.241 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2712092 ']' 00:23:08.241 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.241 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:08.241 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.241 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:08.241 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2712092 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2712092 ']' 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
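For reference, the nvmf_tgt restart traced above reduces to the following sketch. It is not part of the captured output: the netns name, binary path, and flags are copied from the trace, while the polling loop merely stands in for the autotest waitforlisten helper and rpc_get_methods serves only as a liveness probe.

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Block until the app answers on its default UNIX-domain RPC socket.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
# --wait-for-rpc leaves the app in a pre-init state, so DHCHAP keys and options
# can be loaded first; initialization is then completed via framework_start_init.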
00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:09.181 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.442 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.443 null0 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Blx 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.usM ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.usM 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dk1 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.GA3 ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GA3 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.443 06:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.iSo 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.iyI ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iyI 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.C37 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
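The keyring_file_add_key calls above replace the inline DHHC-1 secrets used earlier with file-backed keys, after which connect_authenticate repeats the attach flow against key3. Condensed into a standalone sequence (rpc.py abbreviates the script's rpc_cmd/hostrpc wrappers; target-side calls actually run inside the cvl_0_0_ns_spdk netns, and the host-side app on /var/tmp/host.sock is assumed to have the same keys loaded in its own keyring, which this excerpt does not show), the flow is roughly:

rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.C37        # target: load the DHCHAP key
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
    --dhchap-key key3                                            # target: allow this host with key3
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192          # host: pin digest and DH group
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3     # host: authenticate and attach

All command names and arguments appear verbatim in the trace; only their arrangement into this sequence is editorial.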
00:23:09.443 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:10.382 nvme0n1 00:23:10.382 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.382 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.382 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.382 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.382 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.382 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.382 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.382 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.382 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.382 { 00:23:10.382 "cntlid": 1, 00:23:10.382 "qid": 0, 00:23:10.382 "state": "enabled", 00:23:10.382 "thread": "nvmf_tgt_poll_group_000", 00:23:10.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:10.382 "listen_address": { 00:23:10.382 "trtype": "TCP", 00:23:10.382 "adrfam": "IPv4", 00:23:10.382 "traddr": "10.0.0.2", 00:23:10.382 "trsvcid": "4420" 00:23:10.382 }, 00:23:10.382 "peer_address": { 00:23:10.382 "trtype": "TCP", 00:23:10.382 "adrfam": "IPv4", 00:23:10.382 "traddr": "10.0.0.1", 00:23:10.382 "trsvcid": "53310" 00:23:10.382 }, 00:23:10.382 "auth": { 00:23:10.382 "state": "completed", 00:23:10.382 "digest": "sha512", 00:23:10.382 "dhgroup": "ffdhe8192" 00:23:10.382 } 00:23:10.382 } 00:23:10.382 ]' 00:23:10.642 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:10.642 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.642 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:10.642 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:10.642 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.642 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.642 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.642 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.902 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:23:10.902 06:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:23:11.473 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.473 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:11.473 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.473 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.473 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.473 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:23:11.473 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.473 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.473 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.473 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:11.473 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:11.734 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:11.734 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:11.734 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:11.734 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:11.734 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.734 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:11.734 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.734 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:11.734 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.734 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.995 request: 00:23:11.995 { 00:23:11.995 "name": "nvme0", 00:23:11.995 "trtype": "tcp", 00:23:11.995 "traddr": "10.0.0.2", 00:23:11.995 "adrfam": "ipv4", 00:23:11.995 "trsvcid": "4420", 00:23:11.995 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:11.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:11.995 "prchk_reftag": false, 00:23:11.995 "prchk_guard": false, 00:23:11.995 "hdgst": false, 00:23:11.995 "ddgst": false, 00:23:11.995 "dhchap_key": "key3", 00:23:11.995 "allow_unrecognized_csi": false, 00:23:11.995 "method": "bdev_nvme_attach_controller", 00:23:11.995 "req_id": 1 00:23:11.995 } 00:23:11.995 Got JSON-RPC error response 00:23:11.995 response: 00:23:11.995 { 00:23:11.995 "code": -5, 00:23:11.995 "message": "Input/output error" 00:23:11.995 } 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:11.995 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:11.996 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:11.996 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.996 06:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:12.257 request: 00:23:12.257 { 00:23:12.257 "name": "nvme0", 00:23:12.257 "trtype": "tcp", 00:23:12.257 "traddr": "10.0.0.2", 00:23:12.257 "adrfam": "ipv4", 00:23:12.257 "trsvcid": "4420", 00:23:12.257 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:12.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:12.257 "prchk_reftag": false, 00:23:12.257 "prchk_guard": false, 00:23:12.257 "hdgst": false, 00:23:12.257 "ddgst": false, 00:23:12.257 "dhchap_key": "key3", 00:23:12.257 "allow_unrecognized_csi": false, 00:23:12.257 "method": "bdev_nvme_attach_controller", 00:23:12.257 "req_id": 1 00:23:12.257 } 00:23:12.257 Got JSON-RPC error response 00:23:12.257 response: 00:23:12.257 { 00:23:12.257 "code": -5, 00:23:12.257 "message": "Input/output error" 00:23:12.257 } 00:23:12.257 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:12.257 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.257 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.257 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.257 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:12.257 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:12.257 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:12.257 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.257 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.257 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.519 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:12.780 request: 00:23:12.780 { 00:23:12.780 "name": "nvme0", 00:23:12.780 "trtype": "tcp", 00:23:12.780 "traddr": "10.0.0.2", 00:23:12.780 "adrfam": "ipv4", 00:23:12.780 "trsvcid": "4420", 00:23:12.780 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:12.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:12.780 "prchk_reftag": false, 00:23:12.780 "prchk_guard": false, 00:23:12.780 "hdgst": false, 00:23:12.780 "ddgst": false, 00:23:12.780 "dhchap_key": "key0", 00:23:12.780 "dhchap_ctrlr_key": "key1", 00:23:12.780 "allow_unrecognized_csi": false, 00:23:12.780 "method": "bdev_nvme_attach_controller", 00:23:12.780 "req_id": 1 00:23:12.780 } 00:23:12.780 Got JSON-RPC error response 00:23:12.780 response: 00:23:12.780 { 00:23:12.780 "code": -5, 00:23:12.780 "message": "Input/output error" 00:23:12.780 } 00:23:12.780 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:12.780 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.780 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.780 06:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.780 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:12.780 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:12.780 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:13.041 nvme0n1 00:23:13.041 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:13.041 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:13.041 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.301 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.301 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.301 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.564 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:23:13.564 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.564 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.564 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.564 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:13.564 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:13.564 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:14.136 nvme0n1 00:23:14.136 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:14.136 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:14.136 06:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.397 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.397 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:14.397 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.397 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.397 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.397 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:14.397 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:14.397 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.657 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.657 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:23:14.657 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: --dhchap-ctrl-secret DHHC-1:03:NmFmMmJiNDJjMzUwNWU5OWUyM2UyZjIwMjUwYzkxNzkzNjM5MzUyMmE3YzEyMzMwZDQ1MDUwOGVhYjllOTM1MHXmufI=: 00:23:15.229 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:15.229 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:15.229 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:15.229 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:15.229 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:15.229 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:15.229 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:15.229 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.229 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.490 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:23:15.490 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:15.490 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:15.490 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:15.490 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.490 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:15.490 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:15.490 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:15.491 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:15.491 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:15.751 request: 00:23:15.751 { 00:23:15.751 "name": "nvme0", 00:23:15.751 "trtype": "tcp", 00:23:15.751 "traddr": "10.0.0.2", 00:23:15.751 "adrfam": "ipv4", 00:23:15.751 "trsvcid": "4420", 00:23:15.751 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:15.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:23:15.751 "prchk_reftag": false, 00:23:15.751 "prchk_guard": false, 00:23:15.751 "hdgst": false, 00:23:15.751 "ddgst": false, 00:23:15.751 "dhchap_key": "key1", 00:23:15.751 "allow_unrecognized_csi": false, 00:23:15.751 "method": "bdev_nvme_attach_controller", 00:23:15.751 "req_id": 1 00:23:15.751 } 00:23:15.751 Got JSON-RPC error response 00:23:15.751 response: 00:23:15.751 { 00:23:15.751 "code": -5, 00:23:15.751 "message": "Input/output error" 00:23:15.751 } 00:23:15.751 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:15.751 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:15.751 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:15.751 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:15.751 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:15.751 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:15.751 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:16.692 nvme0n1 00:23:16.692 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:16.692 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:16.692 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.692 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.692 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.692 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.952 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:16.952 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.952 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.952 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.952 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:16.953 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:16.953 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:17.213 nvme0n1 00:23:17.213 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:17.213 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:17.213 06:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: '' 2s 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: ]] 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MzZmNDIyYzEwYmRkNTEyY2U4MjMyNmY3NGJiYjUzODc12hgg: 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:17.474 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: 2s 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: 00:23:20.017 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:20.018 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:20.018 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:20.018 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: ]] 00:23:20.018 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZmUxY2E5YWVhNDdkNjlmY2FhZTI4NDZlNWJlZWRkNjVkNjNkMDZkZTBjZTFhNDU2WdpsFw==: 00:23:20.018 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:20.018 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:21.947 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:21.947 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:23:21.947 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:21.947 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:21.947 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:21.948 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:21.948 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:23:21.948 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.948 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:21.948 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.948 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.948 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.948 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:21.948 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:21.948 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:22.544 nvme0n1 00:23:22.544 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:22.544 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.544 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.544 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.544 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:22.544 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:22.805 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:22.805 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:22.805 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.065 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.065 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:23.065 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.065 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.065 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.065 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:23.065 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.325 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:23.895 request: 00:23:23.895 { 00:23:23.895 "name": "nvme0", 00:23:23.895 "dhchap_key": "key1", 00:23:23.895 "dhchap_ctrlr_key": "key3", 00:23:23.895 "method": "bdev_nvme_set_keys", 00:23:23.895 "req_id": 1 00:23:23.895 } 00:23:23.895 Got JSON-RPC error response 00:23:23.895 response: 00:23:23.895 { 00:23:23.895 "code": -13, 00:23:23.895 "message": "Permission denied" 00:23:23.895 } 00:23:23.895 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:23.895 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:23.895 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:23.895 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:23.895 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:23.896 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:23.896 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.156 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:24.156 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:25.095 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:25.095 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:25.095 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.355 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:25.355 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:25.355 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.355 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.355 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.355 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:25.355 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:25.355 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:25.925 nvme0n1 00:23:25.925 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:25.925 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.925 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.925 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.925 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:25.925 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:25.925 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:25.925 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
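The pattern being exercised here is two-sided re-keying: the target swaps its keys with nvmf_subsystem_set_keys, and the host must follow with a matching bdev_nvme_set_keys before the next re-authentication. A host-side update that disagrees with the target is refused with -13 (Permission denied), after which the controller, attached with --ctrlr-loss-timeout-sec 1, is dropped, and the (( ... != 0 )) / sleep 1s pair above polls until it is gone. Condensed, with the same subsystem, host NQN and socket as this run:

  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # mismatched host-side update, expected to fail with "Permission denied":
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key key3
  # wait for the failed controller to be reaped:
  while (( $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1
  done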
00:23:25.925 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.926 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:25.926 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.926 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:25.926 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:26.496 request: 00:23:26.496 { 00:23:26.496 "name": "nvme0", 00:23:26.496 "dhchap_key": "key2", 00:23:26.496 "dhchap_ctrlr_key": "key0", 00:23:26.496 "method": "bdev_nvme_set_keys", 00:23:26.496 "req_id": 1 00:23:26.496 } 00:23:26.496 Got JSON-RPC error response 00:23:26.496 response: 00:23:26.496 { 00:23:26.496 "code": -13, 00:23:26.496 "message": "Permission denied" 00:23:26.496 } 00:23:26.496 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:26.496 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.496 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.496 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.496 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:26.496 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:26.496 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.758 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:26.758 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:27.701 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:27.701 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:27.701 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2686124 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2686124 ']' 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2686124 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:23:27.961 
06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2686124 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2686124' 00:23:27.961 killing process with pid 2686124 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2686124 00:23:27.961 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2686124 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.222 rmmod nvme_tcp 00:23:28.222 rmmod nvme_fabrics 00:23:28.222 rmmod nvme_keyring 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2712092 ']' 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2712092 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2712092 ']' 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2712092 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:28.222 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2712092 00:23:28.222 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:28.222 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:28.222 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2712092' 00:23:28.222 killing process with pid 2712092 00:23:28.222 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2712092 00:23:28.222 06:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2712092 00:23:28.483 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.483 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.483 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.483 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:28.483 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:23:28.483 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.483 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.484 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.484 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.484 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.484 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.484 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.396 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.396 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Blx /tmp/spdk.key-sha256.dk1 /tmp/spdk.key-sha384.iSo /tmp/spdk.key-sha512.C37 /tmp/spdk.key-sha512.usM /tmp/spdk.key-sha384.GA3 /tmp/spdk.key-sha256.iyI '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:30.396 00:23:30.396 real 2m36.844s 00:23:30.396 user 5m52.415s 00:23:30.396 sys 0m24.965s 00:23:30.396 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:30.396 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.396 ************************************ 00:23:30.396 END TEST nvmf_auth_target 00:23:30.396 ************************************ 00:23:30.396 06:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:30.396 06:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:30.396 06:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:23:30.396 06:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:30.396 06:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:30.658 ************************************ 00:23:30.658 START TEST nvmf_bdevio_no_huge 00:23:30.658 ************************************ 00:23:30.658 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:30.658 * Looking for test storage... 
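For reference, the auth target run above tears itself down in the usual order before the bdevio test begins: kill the host app and the target, unload the kernel NVMe/TCP modules, restore the SPDK iptables rules and netns, and delete the generated key files. Roughly (the pids and key-file names are the ones printed above; the shell variables are illustrative, not part of the captured script):

  kill "$hostpid" "$nvmfpid"      # 2686124 (host) and 2712092 (target) in this run
  modprobe -r nvme-tcp nvme-fabrics
  rm -f /tmp/spdk.key-null.Blx /tmp/spdk.key-sha256.dk1 /tmp/spdk.key-sha384.iSo \
      /tmp/spdk.key-sha512.C37 /tmp/spdk.key-sha512.usM /tmp/spdk.key-sha384.GA3 \
      /tmp/spdk.key-sha256.iyI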
00:23:30.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:30.658 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:30.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.659 --rc genhtml_branch_coverage=1 00:23:30.659 --rc genhtml_function_coverage=1 00:23:30.659 --rc genhtml_legend=1 00:23:30.659 --rc geninfo_all_blocks=1 00:23:30.659 --rc geninfo_unexecuted_blocks=1 00:23:30.659 00:23:30.659 ' 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:30.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.659 --rc genhtml_branch_coverage=1 00:23:30.659 --rc genhtml_function_coverage=1 00:23:30.659 --rc genhtml_legend=1 00:23:30.659 --rc geninfo_all_blocks=1 00:23:30.659 --rc geninfo_unexecuted_blocks=1 00:23:30.659 00:23:30.659 ' 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:30.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.659 --rc genhtml_branch_coverage=1 00:23:30.659 --rc genhtml_function_coverage=1 00:23:30.659 --rc genhtml_legend=1 00:23:30.659 --rc geninfo_all_blocks=1 00:23:30.659 --rc geninfo_unexecuted_blocks=1 00:23:30.659 00:23:30.659 ' 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:30.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.659 --rc genhtml_branch_coverage=1 00:23:30.659 --rc genhtml_function_coverage=1 00:23:30.659 --rc genhtml_legend=1 00:23:30.659 --rc geninfo_all_blocks=1 00:23:30.659 --rc geninfo_unexecuted_blocks=1 00:23:30.659 00:23:30.659 ' 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:30.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.659 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.660 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:38.801 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.801 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.801 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.801 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.801 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.802 
06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:38.802 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:38.802 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:38.802 Found net devices under 0000:31:00.0: cvl_0_0 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:38.802 Found net devices under 0000:31:00.1: cvl_0_1 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.802 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.802 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.802 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.802 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:38.802 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.802 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.802 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.802 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:38.802 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:38.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:23:38.802 00:23:38.802 --- 10.0.0.2 ping statistics --- 00:23:38.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.802 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:23:38.802 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:23:38.802 00:23:38.802 --- 10.0.0.1 ping statistics --- 00:23:38.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.802 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2720345 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2720345 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 2720345 ']' 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:38.803 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:38.803 [2024-11-20 06:34:58.272371] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:23:38.803 [2024-11-20 06:34:58.272445] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:38.803 [2024-11-20 06:34:58.382512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:38.803 [2024-11-20 06:34:58.442487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.803 [2024-11-20 06:34:58.442533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.803 [2024-11-20 06:34:58.442541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.803 [2024-11-20 06:34:58.442549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.803 [2024-11-20 06:34:58.442559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:38.803 [2024-11-20 06:34:58.444476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:38.803 [2024-11-20 06:34:58.444638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:38.803 [2024-11-20 06:34:58.444802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:38.803 [2024-11-20 06:34:58.444839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.374 [2024-11-20 06:34:59.146091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.374 Malloc0 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:39.374 [2024-11-20 06:34:59.200025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:39.374 { 00:23:39.374 "params": { 00:23:39.374 "name": "Nvme$subsystem", 00:23:39.374 "trtype": "$TEST_TRANSPORT", 00:23:39.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.374 "adrfam": "ipv4", 00:23:39.374 "trsvcid": "$NVMF_PORT", 00:23:39.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.374 "hdgst": ${hdgst:-false}, 00:23:39.374 "ddgst": ${ddgst:-false} 00:23:39.374 }, 00:23:39.374 "method": "bdev_nvme_attach_controller" 00:23:39.374 } 00:23:39.374 EOF 00:23:39.374 )") 00:23:39.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:39.375 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:23:39.375 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:39.375 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:39.375 "params": { 00:23:39.375 "name": "Nvme1", 00:23:39.375 "trtype": "tcp", 00:23:39.375 "traddr": "10.0.0.2", 00:23:39.375 "adrfam": "ipv4", 00:23:39.375 "trsvcid": "4420", 00:23:39.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.375 "hdgst": false, 00:23:39.375 "ddgst": false 00:23:39.375 }, 00:23:39.375 "method": "bdev_nvme_attach_controller" 00:23:39.375 }' 00:23:39.375 [2024-11-20 06:34:59.259128] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:23:39.375 [2024-11-20 06:34:59.259202] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2720641 ] 00:23:39.635 [2024-11-20 06:34:59.356576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:39.635 [2024-11-20 06:34:59.416664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.635 [2024-11-20 06:34:59.416831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.635 [2024-11-20 06:34:59.417043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.896 I/O targets: 00:23:39.896 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:39.896 00:23:39.896 00:23:39.896 CUnit - A unit testing framework for C - Version 2.1-3 00:23:39.896 http://cunit.sourceforge.net/ 00:23:39.896 00:23:39.896 00:23:39.896 Suite: bdevio tests on: Nvme1n1 00:23:39.896 Test: blockdev write read block ...passed 00:23:39.896 Test: blockdev write zeroes read block ...passed 00:23:39.896 Test: blockdev write zeroes read no split ...passed 00:23:39.896 Test: blockdev write zeroes read split ...passed 00:23:40.156 Test: blockdev write zeroes read split partial ...passed 00:23:40.156 Test: blockdev reset ...[2024-11-20 06:34:59.858322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:40.156 [2024-11-20 06:34:59.858435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1744400 (9): Bad file descriptor 00:23:40.156 [2024-11-20 06:34:59.919767] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:23:40.156 passed 00:23:40.156 Test: blockdev write read 8 blocks ...passed 00:23:40.156 Test: blockdev write read size > 128k ...passed 00:23:40.156 Test: blockdev write read invalid size ...passed 00:23:40.156 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:40.156 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:40.156 Test: blockdev write read max offset ...passed 00:23:40.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:40.417 Test: blockdev writev readv 8 blocks ...passed 00:23:40.417 Test: blockdev writev readv 30 x 1block ...passed 00:23:40.417 Test: blockdev writev readv block ...passed 00:23:40.417 Test: blockdev writev readv size > 128k ...passed 00:23:40.417 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:40.417 Test: blockdev comparev and writev ...[2024-11-20 06:35:00.184103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:40.417 [2024-11-20 06:35:00.184154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.417 [2024-11-20 06:35:00.184179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:40.417 [2024-11-20 06:35:00.184188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:40.417 [2024-11-20 06:35:00.184742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:40.417 [2024-11-20 06:35:00.184762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:40.417 [2024-11-20 06:35:00.184777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:40.417 [2024-11-20 06:35:00.184785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:40.417 [2024-11-20 06:35:00.185371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:40.417 [2024-11-20 06:35:00.185382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:40.417 [2024-11-20 06:35:00.185396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:40.417 [2024-11-20 06:35:00.185404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:40.417 [2024-11-20 06:35:00.186032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:40.417 [2024-11-20 06:35:00.186043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:40.417 [2024-11-20 06:35:00.186057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:40.417 [2024-11-20 06:35:00.186066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:40.417 passed 00:23:40.417 Test: blockdev nvme passthru rw ...passed 00:23:40.417 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:35:00.269697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:40.417 [2024-11-20 06:35:00.269712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:40.417 [2024-11-20 06:35:00.270103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:40.417 [2024-11-20 06:35:00.270116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:40.417 [2024-11-20 06:35:00.270473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:40.417 [2024-11-20 06:35:00.270484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:40.417 [2024-11-20 06:35:00.270879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:40.417 [2024-11-20 06:35:00.270889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:40.417 passed 00:23:40.417 Test: blockdev nvme admin passthru ...passed 00:23:40.417 Test: blockdev copy ...passed 00:23:40.417 00:23:40.417 Run Summary: Type Total Ran Passed Failed Inactive 00:23:40.417 suites 1 1 n/a 0 0 00:23:40.417 tests 23 23 23 0 0 00:23:40.417 asserts 152 152 152 0 n/a 00:23:40.417 00:23:40.417 Elapsed time = 1.304 seconds 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.989 rmmod nvme_tcp 00:23:40.989 rmmod nvme_fabrics 00:23:40.989 rmmod nvme_keyring 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2720345 ']' 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2720345 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 2720345 ']' 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 2720345 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2720345 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2720345' 00:23:40.989 killing process with pid 2720345 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 2720345 00:23:40.989 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 2720345 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.249 06:35:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:43.794 00:23:43.794 real 0m12.777s 00:23:43.794 user 0m14.951s 00:23:43.794 sys 0m6.749s 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:43.794 ************************************ 00:23:43.794 END TEST nvmf_bdevio_no_huge 00:23:43.794 ************************************ 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:43.794 ************************************ 00:23:43.794 START TEST nvmf_tls 00:23:43.794 ************************************ 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:43.794 * Looking for test storage... 00:23:43.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:43.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.794 --rc genhtml_branch_coverage=1 00:23:43.794 --rc genhtml_function_coverage=1 00:23:43.794 --rc genhtml_legend=1 00:23:43.794 --rc geninfo_all_blocks=1 00:23:43.794 --rc geninfo_unexecuted_blocks=1 00:23:43.794 00:23:43.794 ' 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:43.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.794 --rc genhtml_branch_coverage=1 00:23:43.794 --rc genhtml_function_coverage=1 00:23:43.794 --rc genhtml_legend=1 00:23:43.794 --rc geninfo_all_blocks=1 00:23:43.794 --rc geninfo_unexecuted_blocks=1 00:23:43.794 00:23:43.794 ' 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:43.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.794 --rc genhtml_branch_coverage=1 00:23:43.794 --rc genhtml_function_coverage=1 00:23:43.794 --rc genhtml_legend=1 00:23:43.794 --rc geninfo_all_blocks=1 00:23:43.794 --rc geninfo_unexecuted_blocks=1 00:23:43.794 00:23:43.794 ' 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:43.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.794 --rc genhtml_branch_coverage=1 00:23:43.794 --rc genhtml_function_coverage=1 00:23:43.794 --rc genhtml_legend=1 00:23:43.794 --rc geninfo_all_blocks=1 00:23:43.794 --rc geninfo_unexecuted_blocks=1 00:23:43.794 00:23:43.794 ' 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:43.794 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:43.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:43.795 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:51.934 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:51.934 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:51.934 Found net devices under 0000:31:00.0: cvl_0_0 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:51.934 Found net devices under 0000:31:00.1: cvl_0_1 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.934 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:51.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:23:51.935 00:23:51.935 --- 10.0.0.2 ping statistics --- 00:23:51.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.935 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:23:51.935 06:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:23:51.935 00:23:51.935 --- 10.0.0.1 ping statistics --- 00:23:51.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.935 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2725199 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2725199 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2725199 ']' 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:51.935 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.935 [2024-11-20 06:35:11.124871] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:23:51.935 [2024-11-20 06:35:11.124938] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.935 [2024-11-20 06:35:11.227494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.935 [2024-11-20 06:35:11.278270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.935 [2024-11-20 06:35:11.278323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.935 [2024-11-20 06:35:11.278331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.935 [2024-11-20 06:35:11.278339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.935 [2024-11-20 06:35:11.278345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.935 [2024-11-20 06:35:11.279191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.197 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:52.197 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:23:52.197 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.198 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.198 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.198 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.198 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:52.198 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:52.459 true 00:23:52.459 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:52.459 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:52.459 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:52.459 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:52.720 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:52.720 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:52.720 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:52.981 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:52.981 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:52.981 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:53.242 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:53.242 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:53.242 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:53.242 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:53.242 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:53.242 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:53.502 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:53.502 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:53.502 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:53.763 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:53.763 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:54.023 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:54.023 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:54.023 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:54.024 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:54.024 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.2yC3tb4r6Q 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.CoDLXYms0w 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.2yC3tb4r6Q 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.CoDLXYms0w 00:23:54.284 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:54.545 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:54.805 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.2yC3tb4r6Q 00:23:54.805 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2yC3tb4r6Q 00:23:54.805 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.805 [2024-11-20 06:35:14.702855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.805 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:55.066 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.327 [2024-11-20 06:35:15.023629] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.327 [2024-11-20 06:35:15.023832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.327 06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.327 malloc0 00:23:55.327 06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:55.586 06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2yC3tb4r6Q 00:23:55.847 06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.847 06:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2yC3tb4r6Q 00:24:08.103 Initializing NVMe Controllers 00:24:08.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:08.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:08.103 Initialization complete. Launching workers. 00:24:08.103 ======================================================== 00:24:08.103 Latency(us) 00:24:08.103 Device Information : IOPS MiB/s Average min max 00:24:08.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18861.14 73.68 3393.42 1044.77 3970.24 00:24:08.103 ======================================================== 00:24:08.103 Total : 18861.14 73.68 3393.42 1044.77 3970.24 00:24:08.103 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2yC3tb4r6Q 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2yC3tb4r6Q 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2728095 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2728095 /var/tmp/bdevperf.sock 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2728095 ']' 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:08.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:08.103 06:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.103 [2024-11-20 06:35:25.864993] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:08.103 [2024-11-20 06:35:25.865049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728095 ] 00:24:08.103 [2024-11-20 06:35:25.953344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.103 [2024-11-20 06:35:25.988353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.103 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:08.103 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:08.103 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2yC3tb4r6Q 00:24:08.103 06:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:08.103 [2024-11-20 06:35:26.989480] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.103 TLSTESTn1 00:24:08.103 06:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:08.103 Running I/O for 10 seconds... 
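Condensed, the TLS happy path exercised here comes down to three RPCs against the bdevperf application; the socket, key file, flags, and NQNs below are taken verbatim from the log, with the long script paths shortened to rpc.py and bdevperf.py for readability:

    # 1. Register the PSK file under the name "key0" in bdevperf's keyring.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2yC3tb4r6Q

    # 2. Attach a controller over TCP with TLS, referencing the registered key.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # 3. Drive the configured workload (-q 128, 4 KiB, verify) and collect stats.
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests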
00:24:09.305 4994.00 IOPS, 19.51 MiB/s [2024-11-20T05:35:30.635Z] 5005.00 IOPS, 19.55 MiB/s [2024-11-20T05:35:31.205Z] 5308.33 IOPS, 20.74 MiB/s [2024-11-20T05:35:32.586Z] 5318.00 IOPS, 20.77 MiB/s [2024-11-20T05:35:33.526Z] 5059.20 IOPS, 19.76 MiB/s [2024-11-20T05:35:34.465Z] 5123.83 IOPS, 20.01 MiB/s [2024-11-20T05:35:35.405Z] 5289.71 IOPS, 20.66 MiB/s [2024-11-20T05:35:36.346Z] 5303.38 IOPS, 20.72 MiB/s [2024-11-20T05:35:37.285Z] 5266.78 IOPS, 20.57 MiB/s [2024-11-20T05:35:37.285Z] 5212.10 IOPS, 20.36 MiB/s 00:24:17.365 Latency(us) 00:24:17.365 [2024-11-20T05:35:37.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.365 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:17.365 Verification LBA range: start 0x0 length 0x2000 00:24:17.365 TLSTESTn1 : 10.01 5217.67 20.38 0.00 0.00 24495.08 6280.53 83012.27 00:24:17.365 [2024-11-20T05:35:37.285Z] =================================================================================================================== 00:24:17.365 [2024-11-20T05:35:37.285Z] Total : 5217.67 20.38 0.00 0.00 24495.08 6280.53 83012.27 00:24:17.365 { 00:24:17.365 "results": [ 00:24:17.365 { 00:24:17.365 "job": "TLSTESTn1", 00:24:17.365 "core_mask": "0x4", 00:24:17.365 "workload": "verify", 00:24:17.365 "status": "finished", 00:24:17.365 "verify_range": { 00:24:17.365 "start": 0, 00:24:17.365 "length": 8192 00:24:17.365 }, 00:24:17.365 "queue_depth": 128, 00:24:17.365 "io_size": 4096, 00:24:17.365 "runtime": 10.013672, 00:24:17.365 "iops": 5217.666406489048, 00:24:17.365 "mibps": 20.381509400347845, 00:24:17.365 "io_failed": 0, 00:24:17.365 "io_timeout": 0, 00:24:17.365 "avg_latency_us": 24495.080324605726, 00:24:17.365 "min_latency_us": 6280.533333333334, 00:24:17.365 "max_latency_us": 83012.26666666666 00:24:17.365 } 00:24:17.365 ], 00:24:17.365 "core_count": 1 00:24:17.365 } 00:24:17.365 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:17.365 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2728095 00:24:17.365 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2728095 ']' 00:24:17.365 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2728095 00:24:17.365 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:17.365 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:17.365 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2728095 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2728095' 00:24:17.626 killing process with pid 2728095 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2728095 00:24:17.626 Received shutdown signal, test time was about 10.000000 seconds 00:24:17.626 00:24:17.626 Latency(us) 00:24:17.626 [2024-11-20T05:35:37.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.626 [2024-11-20T05:35:37.546Z] 
=================================================================================================================== 00:24:17.626 [2024-11-20T05:35:37.546Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2728095 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CoDLXYms0w 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CoDLXYms0w 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CoDLXYms0w 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CoDLXYms0w 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2730397 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.626 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2730397 /var/tmp/bdevperf.sock 00:24:17.627 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:17.627 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2730397 ']' 00:24:17.627 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.627 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:17.627 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
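From this point the script switches to negative testing: the same attach is retried with the second key, /tmp/tmp.CoDLXYms0w, which was never registered on the target, and the whole run_bdevperf call is wrapped in NOT so the case only passes if the attach fails. The inversion reduces to roughly this sketch, a simplified stand-in for the NOT/valid_exec_arg machinery in common/autotest_common.sh:

    # Succeed only when the wrapped command fails (simplified sketch).
    NOT() {
        if "$@"; then
            return 1    # unexpected success, so the negative test fails
        fi
        return 0        # expected failure, so the negative test passes
    }

    NOT false && echo "wrapped command failed, as the test expects"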
00:24:17.627 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:17.627 06:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.627 [2024-11-20 06:35:37.472538] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:17.627 [2024-11-20 06:35:37.472595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730397 ] 00:24:17.887 [2024-11-20 06:35:37.555909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.887 [2024-11-20 06:35:37.584793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.458 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:18.458 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:18.458 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CoDLXYms0w 00:24:18.719 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:18.719 [2024-11-20 06:35:38.572441] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.719 [2024-11-20 06:35:38.577043] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:18.719 [2024-11-20 06:35:38.577658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2427bc0 (107): Transport endpoint is not connected 00:24:18.719 [2024-11-20 06:35:38.578654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2427bc0 (9): Bad file descriptor 00:24:18.719 [2024-11-20 06:35:38.579656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:18.719 [2024-11-20 06:35:38.579663] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:18.719 [2024-11-20 06:35:38.579668] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:18.719 [2024-11-20 06:35:38.579676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:24:18.719 request: 00:24:18.719 { 00:24:18.719 "name": "TLSTEST", 00:24:18.719 "trtype": "tcp", 00:24:18.719 "traddr": "10.0.0.2", 00:24:18.719 "adrfam": "ipv4", 00:24:18.719 "trsvcid": "4420", 00:24:18.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.719 "prchk_reftag": false, 00:24:18.719 "prchk_guard": false, 00:24:18.719 "hdgst": false, 00:24:18.719 "ddgst": false, 00:24:18.719 "psk": "key0", 00:24:18.719 "allow_unrecognized_csi": false, 00:24:18.719 "method": "bdev_nvme_attach_controller", 00:24:18.719 "req_id": 1 00:24:18.719 } 00:24:18.719 Got JSON-RPC error response 00:24:18.719 response: 00:24:18.719 { 00:24:18.719 "code": -5, 00:24:18.719 "message": "Input/output error" 00:24:18.719 } 00:24:18.719 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2730397 00:24:18.719 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2730397 ']' 00:24:18.719 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2730397 00:24:18.719 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:18.719 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:18.719 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2730397 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2730397' 00:24:18.980 killing process with pid 2730397 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2730397 00:24:18.980 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.980 00:24:18.980 Latency(us) 00:24:18.980 [2024-11-20T05:35:38.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.980 [2024-11-20T05:35:38.900Z] =================================================================================================================== 00:24:18.980 [2024-11-20T05:35:38.900Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2730397 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2yC3tb4r6Q 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.2yC3tb4r6Q 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2yC3tb4r6Q 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2yC3tb4r6Q 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2730557 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2730557 /var/tmp/bdevperf.sock 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2730557 ']' 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:18.980 06:35:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.980 [2024-11-20 06:35:38.837378] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:24:18.980 [2024-11-20 06:35:38.837450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730557 ] 00:24:19.241 [2024-11-20 06:35:38.922836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.241 [2024-11-20 06:35:38.951666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.811 06:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:19.811 06:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:19.811 06:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2yC3tb4r6Q 00:24:20.072 06:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:20.072 [2024-11-20 06:35:39.935262] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.072 [2024-11-20 06:35:39.939813] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:20.072 [2024-11-20 06:35:39.939834] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:20.072 [2024-11-20 06:35:39.939853] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:20.072 [2024-11-20 06:35:39.940495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128ebc0 (107): Transport endpoint is not connected 00:24:20.072 [2024-11-20 06:35:39.941490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x128ebc0 (9): Bad file descriptor 00:24:20.072 [2024-11-20 06:35:39.942492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:20.072 [2024-11-20 06:35:39.942498] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:20.072 [2024-11-20 06:35:39.942505] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:20.072 [2024-11-20 06:35:39.942513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:24:20.072 request: 00:24:20.072 { 00:24:20.072 "name": "TLSTEST", 00:24:20.072 "trtype": "tcp", 00:24:20.073 "traddr": "10.0.0.2", 00:24:20.073 "adrfam": "ipv4", 00:24:20.073 "trsvcid": "4420", 00:24:20.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.073 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:20.073 "prchk_reftag": false, 00:24:20.073 "prchk_guard": false, 00:24:20.073 "hdgst": false, 00:24:20.073 "ddgst": false, 00:24:20.073 "psk": "key0", 00:24:20.073 "allow_unrecognized_csi": false, 00:24:20.073 "method": "bdev_nvme_attach_controller", 00:24:20.073 "req_id": 1 00:24:20.073 } 00:24:20.073 Got JSON-RPC error response 00:24:20.073 response: 00:24:20.073 { 00:24:20.073 "code": -5, 00:24:20.073 "message": "Input/output error" 00:24:20.073 } 00:24:20.073 06:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2730557 00:24:20.073 06:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2730557 ']' 00:24:20.073 06:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2730557 00:24:20.073 06:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:20.073 06:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:20.073 06:35:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2730557 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2730557' 00:24:20.334 killing process with pid 2730557 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2730557 00:24:20.334 Received shutdown signal, test time was about 10.000000 seconds 00:24:20.334 00:24:20.334 Latency(us) 00:24:20.334 [2024-11-20T05:35:40.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.334 [2024-11-20T05:35:40.254Z] =================================================================================================================== 00:24:20.334 [2024-11-20T05:35:40.254Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2730557 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2yC3tb4r6Q 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.2yC3tb4r6Q 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2yC3tb4r6Q 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2yC3tb4r6Q 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2730805 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2730805 /var/tmp/bdevperf.sock 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2730805 ']' 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:20.334 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.334 [2024-11-20 06:35:40.172046] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
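The valid_exec_arg/type -t lines in the trace above come from SPDK's NOT() wrapper in common/autotest_common.sh, which inverts an exit status so that an expected failure counts as a pass. A simplified sketch of that logic (not a verbatim copy of the helper):

NOT() {
    local es=0
    "$@" || es=$?
    if ((es > 128)); then
        es=$((es & ~128))  # strip the "killed by signal" bit
    fi
    ((!es == 0))  # succeed only if the wrapped command failed
}
# Usage, as in target/tls.sh: assert the mismatched attach fails.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2yC3tb4r6Q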
00:24:20.334 [2024-11-20 06:35:40.172102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730805 ] 00:24:20.595 [2024-11-20 06:35:40.258826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.595 [2024-11-20 06:35:40.287855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.166 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:21.166 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:21.166 06:35:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2yC3tb4r6Q 00:24:21.425 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:21.426 [2024-11-20 06:35:41.295742] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.426 [2024-11-20 06:35:41.301142] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:21.426 [2024-11-20 06:35:41.301162] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:21.426 [2024-11-20 06:35:41.301180] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:21.426 [2024-11-20 06:35:41.301801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fcbc0 (107): Transport endpoint is not connected 00:24:21.426 [2024-11-20 06:35:41.302797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fcbc0 (9): Bad file descriptor 00:24:21.426 [2024-11-20 06:35:41.303799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:24:21.426 [2024-11-20 06:35:41.303807] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:21.426 [2024-11-20 06:35:41.303813] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:21.426 [2024-11-20 06:35:41.303821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:24:21.426 request: 00:24:21.426 { 00:24:21.426 "name": "TLSTEST", 00:24:21.426 "trtype": "tcp", 00:24:21.426 "traddr": "10.0.0.2", 00:24:21.426 "adrfam": "ipv4", 00:24:21.426 "trsvcid": "4420", 00:24:21.426 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:21.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.426 "prchk_reftag": false, 00:24:21.426 "prchk_guard": false, 00:24:21.426 "hdgst": false, 00:24:21.426 "ddgst": false, 00:24:21.426 "psk": "key0", 00:24:21.426 "allow_unrecognized_csi": false, 00:24:21.426 "method": "bdev_nvme_attach_controller", 00:24:21.426 "req_id": 1 00:24:21.426 } 00:24:21.426 Got JSON-RPC error response 00:24:21.426 response: 00:24:21.426 { 00:24:21.426 "code": -5, 00:24:21.426 "message": "Input/output error" 00:24:21.426 } 00:24:21.426 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2730805 00:24:21.426 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2730805 ']' 00:24:21.426 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2730805 00:24:21.426 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2730805 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2730805' 00:24:21.686 killing process with pid 2730805 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2730805 00:24:21.686 Received shutdown signal, test time was about 10.000000 seconds 00:24:21.686 00:24:21.686 Latency(us) 00:24:21.686 [2024-11-20T05:35:41.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.686 [2024-11-20T05:35:41.606Z] =================================================================================================================== 00:24:21.686 [2024-11-20T05:35:41.606Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2730805 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:21.686 
06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2731155 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2731155 /var/tmp/bdevperf.sock 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2731155 ']' 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:21.686 06:35:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.686 [2024-11-20 06:35:41.553511] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
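The tls.sh@22..35 trace lines repeated in every case correspond to the suite's run_bdevperf helper. A hedged reconstruction from the xtrace (cleanup trap and error paths elided; $rootdir stands for the spdk checkout, and waitforlisten is the suite's own poll-until-RPC-socket-exists helper):

run_bdevperf() {
    local subnqn=$1 hostnqn=$2 psk=$3
    local bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    # -z starts bdevperf idle; it only runs I/O once perform_tests is invoked.
    "$rootdir/build/examples/bdevperf" -m 0x4 -z -r "$bdevperf_rpc_sock" \
        -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" "$bdevperf_rpc_sock"
    "$rootdir/scripts/rpc.py" -s "$bdevperf_rpc_sock" keyring_file_add_key key0 "$psk"
    "$rootdir/scripts/rpc.py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n "$subnqn" -q "$hostnqn" --psk key0
}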
00:24:21.686 [2024-11-20 06:35:41.553567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731155 ] 00:24:21.946 [2024-11-20 06:35:41.637986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.946 [2024-11-20 06:35:41.666689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.516 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:22.516 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:22.516 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:22.776 [2024-11-20 06:35:42.477772] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:22.776 [2024-11-20 06:35:42.477794] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:22.776 request: 00:24:22.776 { 00:24:22.776 "name": "key0", 00:24:22.776 "path": "", 00:24:22.776 "method": "keyring_file_add_key", 00:24:22.776 "req_id": 1 00:24:22.776 } 00:24:22.777 Got JSON-RPC error response 00:24:22.777 response: 00:24:22.777 { 00:24:22.777 "code": -1, 00:24:22.777 "message": "Operation not permitted" 00:24:22.777 } 00:24:22.777 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:22.777 [2024-11-20 06:35:42.646280] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.777 [2024-11-20 06:35:42.646305] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:22.777 request: 00:24:22.777 { 00:24:22.777 "name": "TLSTEST", 00:24:22.777 "trtype": "tcp", 00:24:22.777 "traddr": "10.0.0.2", 00:24:22.777 "adrfam": "ipv4", 00:24:22.777 "trsvcid": "4420", 00:24:22.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:22.777 "prchk_reftag": false, 00:24:22.777 "prchk_guard": false, 00:24:22.777 "hdgst": false, 00:24:22.777 "ddgst": false, 00:24:22.777 "psk": "key0", 00:24:22.777 "allow_unrecognized_csi": false, 00:24:22.777 "method": "bdev_nvme_attach_controller", 00:24:22.777 "req_id": 1 00:24:22.777 } 00:24:22.777 Got JSON-RPC error response 00:24:22.777 response: 00:24:22.777 { 00:24:22.777 "code": -126, 00:24:22.777 "message": "Required key not available" 00:24:22.777 } 00:24:22.777 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2731155 00:24:22.777 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2731155 ']' 00:24:22.777 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2731155 00:24:22.777 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:22.777 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:22.777 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
2731155 00:24:23.036 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:23.036 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:23.036 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2731155' 00:24:23.036 killing process with pid 2731155 00:24:23.036 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2731155 00:24:23.036 Received shutdown signal, test time was about 10.000000 seconds 00:24:23.036 00:24:23.036 Latency(us) 00:24:23.036 [2024-11-20T05:35:42.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.036 [2024-11-20T05:35:42.957Z] =================================================================================================================== 00:24:23.037 [2024-11-20T05:35:42.957Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2731155 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2725199 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2725199 ']' 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2725199 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2725199 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2725199' 00:24:23.037 killing process with pid 2725199 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2725199 00:24:23.037 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2725199 00:24:23.297 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:23.297 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:23.297 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:23.297 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:23.297 06:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:23.297 06:35:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.OlsF664L0z 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.OlsF664L0z 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2731500 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2731500 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2731500 ']' 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:23.297 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.297 [2024-11-20 06:35:43.108483] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:23.297 [2024-11-20 06:35:43.108538] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.297 [2024-11-20 06:35:43.200560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.557 [2024-11-20 06:35:43.230035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.557 [2024-11-20 06:35:43.230067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
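The key_long generated above is in the NVMe/TCP PSK interchange format, "NVMeTLSkey-1:<hash>:<base64 of the configured PSK with its little-endian CRC32 appended>:", where hash id 02 selects SHA-384 (hence the 48-byte PSK). A standalone sketch equivalent to the format_key python heredoc traced above (python3 assumed on PATH):

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" << 'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
# Reproduces the key_long value above, trailing colon included.
print("NVMeTLSkey-1:{:02x}:{}:".format(2, base64.b64encode(key + crc).decode()))
EOF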
00:24:23.557 [2024-11-20 06:35:43.230072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.557 [2024-11-20 06:35:43.230077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.557 [2024-11-20 06:35:43.230081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.557 [2024-11-20 06:35:43.230599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.127 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:24.127 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:24.127 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:24.127 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:24.127 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.127 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.127 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.OlsF664L0z 00:24:24.127 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OlsF664L0z 00:24:24.127 06:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:24.386 [2024-11-20 06:35:44.084029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.386 06:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:24.386 06:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:24.645 [2024-11-20 06:35:44.404822] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:24.645 [2024-11-20 06:35:44.405017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.645 06:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:24.906 malloc0 00:24:24.906 06:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:24.906 06:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OlsF664L0z 00:24:25.167 06:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:25.167 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OlsF664L0z 00:24:25.167 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:25.167 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:25.167 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:25.167 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OlsF664L0z 00:24:25.168 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:25.168 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2731867 00:24:25.168 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:25.168 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2731867 /var/tmp/bdevperf.sock 00:24:25.168 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:25.168 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2731867 ']' 00:24:25.168 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.168 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:25.168 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:25.168 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:25.168 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.428 [2024-11-20 06:35:45.109810] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
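For reference, the target-side setup_nvmf_tgt sequence traced above boils down to these RPCs; -k on the listener is what enables TLS on the NVMe/TCP port, and add_host binds the registered key to the host NQN (rpc.py abbreviates the checkout's scripts/rpc.py):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.OlsF664L0z
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0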
00:24:25.428 [2024-11-20 06:35:45.109863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731867 ] 00:24:25.428 [2024-11-20 06:35:45.192686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.428 [2024-11-20 06:35:45.221837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.030 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:26.030 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:26.030 06:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OlsF664L0z 00:24:26.337 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:26.337 [2024-11-20 06:35:46.205489] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:26.620 TLSTESTn1 00:24:26.620 06:35:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:26.620 Running I/O for 10 seconds... 00:24:28.601 5353.00 IOPS, 20.91 MiB/s [2024-11-20T05:35:49.461Z] 5705.00 IOPS, 22.29 MiB/s [2024-11-20T05:35:50.843Z] 5463.33 IOPS, 21.34 MiB/s [2024-11-20T05:35:51.414Z] 5730.25 IOPS, 22.38 MiB/s [2024-11-20T05:35:52.798Z] 5695.60 IOPS, 22.25 MiB/s [2024-11-20T05:35:53.739Z] 5657.83 IOPS, 22.10 MiB/s [2024-11-20T05:35:54.679Z] 5775.86 IOPS, 22.56 MiB/s [2024-11-20T05:35:55.622Z] 5771.38 IOPS, 22.54 MiB/s [2024-11-20T05:35:56.564Z] 5741.89 IOPS, 22.43 MiB/s [2024-11-20T05:35:56.564Z] 5672.30 IOPS, 22.16 MiB/s 00:24:36.645 Latency(us) 00:24:36.645 [2024-11-20T05:35:56.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.645 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:36.645 Verification LBA range: start 0x0 length 0x2000 00:24:36.645 TLSTESTn1 : 10.02 5673.48 22.16 0.00 0.00 22522.67 6526.29 25012.91 00:24:36.645 [2024-11-20T05:35:56.565Z] =================================================================================================================== 00:24:36.645 [2024-11-20T05:35:56.565Z] Total : 5673.48 22.16 0.00 0.00 22522.67 6526.29 25012.91 00:24:36.645 { 00:24:36.645 "results": [ 00:24:36.645 { 00:24:36.645 "job": "TLSTESTn1", 00:24:36.645 "core_mask": "0x4", 00:24:36.645 "workload": "verify", 00:24:36.645 "status": "finished", 00:24:36.645 "verify_range": { 00:24:36.645 "start": 0, 00:24:36.645 "length": 8192 00:24:36.645 }, 00:24:36.645 "queue_depth": 128, 00:24:36.645 "io_size": 4096, 00:24:36.645 "runtime": 10.020309, 00:24:36.645 "iops": 5673.477734069877, 00:24:36.645 "mibps": 22.16202239871046, 00:24:36.645 "io_failed": 0, 00:24:36.645 "io_timeout": 0, 00:24:36.645 "avg_latency_us": 22522.666674171796, 00:24:36.645 "min_latency_us": 6526.293333333333, 00:24:36.645 "max_latency_us": 25012.906666666666 00:24:36.645 } 00:24:36.645 ], 00:24:36.645 
"core_count": 1 00:24:36.645 } 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2731867 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2731867 ']' 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2731867 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2731867 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2731867' 00:24:36.645 killing process with pid 2731867 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2731867 00:24:36.645 Received shutdown signal, test time was about 10.000000 seconds 00:24:36.645 00:24:36.645 Latency(us) 00:24:36.645 [2024-11-20T05:35:56.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.645 [2024-11-20T05:35:56.565Z] =================================================================================================================== 00:24:36.645 [2024-11-20T05:35:56.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:36.645 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2731867 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.OlsF664L0z 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OlsF664L0z 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OlsF664L0z 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OlsF664L0z 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OlsF664L0z 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2734214 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2734214 /var/tmp/bdevperf.sock 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2734214 ']' 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:36.906 06:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.906 [2024-11-20 06:35:56.693425] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
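The chmod 0666 just before this case deliberately loosens the key file: SPDK's file-based keyring refuses keys whose mode grants group or other access, so the upcoming keyring_file_add_key is expected to fail with "Invalid permissions for key file ... 0100666". The working pattern the suite otherwise uses (key_long and key_long_path are the script's own variables):

echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"  # owner-only, or the keyring rejects the file
rpc.py keyring_file_add_key key0 "$key_long_path"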
00:24:36.906 [2024-11-20 06:35:56.693494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734214 ] 00:24:36.906 [2024-11-20 06:35:56.779280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.906 [2024-11-20 06:35:56.808050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.846 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:37.846 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:37.846 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OlsF664L0z 00:24:37.846 [2024-11-20 06:35:57.603046] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OlsF664L0z': 0100666 00:24:37.846 [2024-11-20 06:35:57.603065] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:37.846 request: 00:24:37.846 { 00:24:37.846 "name": "key0", 00:24:37.846 "path": "/tmp/tmp.OlsF664L0z", 00:24:37.846 "method": "keyring_file_add_key", 00:24:37.846 "req_id": 1 00:24:37.846 } 00:24:37.846 Got JSON-RPC error response 00:24:37.846 response: 00:24:37.846 { 00:24:37.846 "code": -1, 00:24:37.846 "message": "Operation not permitted" 00:24:37.846 } 00:24:37.846 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:38.105 [2024-11-20 06:35:57.771543] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:38.106 [2024-11-20 06:35:57.771565] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:38.106 request: 00:24:38.106 { 00:24:38.106 "name": "TLSTEST", 00:24:38.106 "trtype": "tcp", 00:24:38.106 "traddr": "10.0.0.2", 00:24:38.106 "adrfam": "ipv4", 00:24:38.106 "trsvcid": "4420", 00:24:38.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:38.106 "prchk_reftag": false, 00:24:38.106 "prchk_guard": false, 00:24:38.106 "hdgst": false, 00:24:38.106 "ddgst": false, 00:24:38.106 "psk": "key0", 00:24:38.106 "allow_unrecognized_csi": false, 00:24:38.106 "method": "bdev_nvme_attach_controller", 00:24:38.106 "req_id": 1 00:24:38.106 } 00:24:38.106 Got JSON-RPC error response 00:24:38.106 response: 00:24:38.106 { 00:24:38.106 "code": -126, 00:24:38.106 "message": "Required key not available" 00:24:38.106 } 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2734214 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2734214 ']' 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2734214 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2734214 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2734214' 00:24:38.106 killing process with pid 2734214 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2734214 00:24:38.106 Received shutdown signal, test time was about 10.000000 seconds 00:24:38.106 00:24:38.106 Latency(us) 00:24:38.106 [2024-11-20T05:35:58.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.106 [2024-11-20T05:35:58.026Z] =================================================================================================================== 00:24:38.106 [2024-11-20T05:35:58.026Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2734214 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2731500 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2731500 ']' 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2731500 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:38.106 06:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2731500 00:24:38.106 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:38.106 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:38.106 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2731500' 00:24:38.106 killing process with pid 2731500 00:24:38.106 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2731500 00:24:38.106 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2731500 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2734456 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2734456 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2734456 ']' 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:38.366 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:38.366 [2024-11-20 06:35:58.183586] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:38.366 [2024-11-20 06:35:58.183645] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.366 [2024-11-20 06:35:58.272940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.627 [2024-11-20 06:35:58.302683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.627 [2024-11-20 06:35:58.302712] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.627 [2024-11-20 06:35:58.302718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.627 [2024-11-20 06:35:58.302723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.627 [2024-11-20 06:35:58.302727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
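Each case above is torn down with killprocess before the next target instance is started; the uname/ps checks in its trace guard against killing the wrong process. A simplified sketch of the helper from common/autotest_common.sh (signal-specific branches trimmed; wait assumes the pid is a child of the calling shell, as bdevperf and nvmf_tgt are here):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1  # is it still running?
    # never kill a privileged wrapper by mistake
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}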
00:24:38.627 [2024-11-20 06:35:58.303237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.197 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:39.197 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:39.197 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.197 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:39.197 06:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.197 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.197 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.OlsF664L0z 00:24:39.197 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:39.197 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.OlsF664L0z 00:24:39.197 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:39.197 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.197 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:39.197 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.197 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.OlsF664L0z 00:24:39.197 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OlsF664L0z 00:24:39.198 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:39.457 [2024-11-20 06:35:59.168320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.457 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:39.457 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:39.716 [2024-11-20 06:35:59.489111] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:39.716 [2024-11-20 06:35:59.489303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.716 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:39.975 malloc0 00:24:39.975 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:39.975 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OlsF664L0z 00:24:40.235 [2024-11-20 
06:35:59.980182] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OlsF664L0z': 0100666 00:24:40.235 [2024-11-20 06:35:59.980202] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:40.235 request: 00:24:40.235 { 00:24:40.235 "name": "key0", 00:24:40.235 "path": "/tmp/tmp.OlsF664L0z", 00:24:40.235 "method": "keyring_file_add_key", 00:24:40.235 "req_id": 1 00:24:40.235 } 00:24:40.235 Got JSON-RPC error response 00:24:40.235 response: 00:24:40.235 { 00:24:40.235 "code": -1, 00:24:40.235 "message": "Operation not permitted" 00:24:40.235 } 00:24:40.235 06:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:40.235 [2024-11-20 06:36:00.136589] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:40.235 [2024-11-20 06:36:00.136622] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:40.235 request: 00:24:40.235 { 00:24:40.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.235 "host": "nqn.2016-06.io.spdk:host1", 00:24:40.235 "psk": "key0", 00:24:40.235 "method": "nvmf_subsystem_add_host", 00:24:40.235 "req_id": 1 00:24:40.235 } 00:24:40.235 Got JSON-RPC error response 00:24:40.235 response: 00:24:40.235 { 00:24:40.235 "code": -32603, 00:24:40.235 "message": "Internal error" 00:24:40.235 } 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2734456 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2734456 ']' 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2734456 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2734456 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2734456' 00:24:40.494 killing process with pid 2734456 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2734456 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2734456 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.OlsF664L0z 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:40.494 06:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2734957 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2734957 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2734957 ']' 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:40.494 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.494 [2024-11-20 06:36:00.371523] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:40.494 [2024-11-20 06:36:00.371564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.754 [2024-11-20 06:36:00.426021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.754 [2024-11-20 06:36:00.454460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.754 [2024-11-20 06:36:00.454490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.754 [2024-11-20 06:36:00.454496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.754 [2024-11-20 06:36:00.454501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.754 [2024-11-20 06:36:00.454505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
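Note the asymmetry in the two failure modes above: after keyring_file_add_key is rejected (non-absolute path or 0666 permissions), the initiator-side attach reports -126 "Required key not available", while the target-side nvmf_subsystem_add_host reports -32603 "Internal error" with "Key 'key0' does not exist". Listing the keyring distinguishes the two situations (keyring_get_keys is the stock SPDK RPC for this):

rpc.py keyring_get_keys  # confirm key0 was actually registered
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0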
00:24:40.754 [2024-11-20 06:36:00.455001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.754 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:40.754 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:40.754 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:40.754 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.754 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.754 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.754 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.OlsF664L0z 00:24:40.754 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OlsF664L0z 00:24:40.754 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:41.014 [2024-11-20 06:36:00.722328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.014 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:41.014 06:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:41.275 [2024-11-20 06:36:01.039105] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.275 [2024-11-20 06:36:01.039299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.275 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:41.536 malloc0 00:24:41.536 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:41.536 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OlsF664L0z 00:24:41.796 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:42.055 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2735338 00:24:42.055 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:42.055 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:42.055 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2735338 /var/tmp/bdevperf.sock 00:24:42.055 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 2735338 ']' 00:24:42.055 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.055 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:42.055 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.055 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:42.055 06:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.055 [2024-11-20 06:36:01.778564] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:42.056 [2024-11-20 06:36:01.778618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2735338 ] 00:24:42.056 [2024-11-20 06:36:01.864150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.056 [2024-11-20 06:36:01.893322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.998 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:42.998 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:42.998 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OlsF664L0z 00:24:42.998 06:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:42.998 [2024-11-20 06:36:02.905040] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.259 TLSTESTn1 00:24:43.259 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:43.520 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:43.520 "subsystems": [ 00:24:43.520 { 00:24:43.520 "subsystem": "keyring", 00:24:43.520 "config": [ 00:24:43.520 { 00:24:43.520 "method": "keyring_file_add_key", 00:24:43.520 "params": { 00:24:43.520 "name": "key0", 00:24:43.520 "path": "/tmp/tmp.OlsF664L0z" 00:24:43.520 } 00:24:43.520 } 00:24:43.520 ] 00:24:43.520 }, 00:24:43.520 { 00:24:43.520 "subsystem": "iobuf", 00:24:43.520 "config": [ 00:24:43.520 { 00:24:43.520 "method": "iobuf_set_options", 00:24:43.520 "params": { 00:24:43.520 "small_pool_count": 8192, 00:24:43.520 "large_pool_count": 1024, 00:24:43.520 "small_bufsize": 8192, 00:24:43.520 "large_bufsize": 135168, 00:24:43.520 "enable_numa": false 00:24:43.520 } 00:24:43.520 } 00:24:43.520 ] 00:24:43.520 }, 00:24:43.520 { 00:24:43.520 "subsystem": "sock", 00:24:43.520 "config": [ 00:24:43.520 { 00:24:43.520 "method": "sock_set_default_impl", 00:24:43.520 "params": { 00:24:43.520 "impl_name": "posix" 
00:24:43.520 } 00:24:43.520 }, 00:24:43.520 { 00:24:43.520 "method": "sock_impl_set_options", 00:24:43.520 "params": { 00:24:43.520 "impl_name": "ssl", 00:24:43.520 "recv_buf_size": 4096, 00:24:43.520 "send_buf_size": 4096, 00:24:43.520 "enable_recv_pipe": true, 00:24:43.520 "enable_quickack": false, 00:24:43.520 "enable_placement_id": 0, 00:24:43.520 "enable_zerocopy_send_server": true, 00:24:43.520 "enable_zerocopy_send_client": false, 00:24:43.520 "zerocopy_threshold": 0, 00:24:43.520 "tls_version": 0, 00:24:43.520 "enable_ktls": false 00:24:43.520 } 00:24:43.520 }, 00:24:43.520 { 00:24:43.520 "method": "sock_impl_set_options", 00:24:43.520 "params": { 00:24:43.520 "impl_name": "posix", 00:24:43.520 "recv_buf_size": 2097152, 00:24:43.520 "send_buf_size": 2097152, 00:24:43.520 "enable_recv_pipe": true, 00:24:43.520 "enable_quickack": false, 00:24:43.520 "enable_placement_id": 0, 00:24:43.520 "enable_zerocopy_send_server": true, 00:24:43.520 "enable_zerocopy_send_client": false, 00:24:43.520 "zerocopy_threshold": 0, 00:24:43.520 "tls_version": 0, 00:24:43.520 "enable_ktls": false 00:24:43.520 } 00:24:43.520 } 00:24:43.520 ] 00:24:43.520 }, 00:24:43.520 { 00:24:43.520 "subsystem": "vmd", 00:24:43.520 "config": [] 00:24:43.520 }, 00:24:43.520 { 00:24:43.520 "subsystem": "accel", 00:24:43.520 "config": [ 00:24:43.520 { 00:24:43.520 "method": "accel_set_options", 00:24:43.520 "params": { 00:24:43.520 "small_cache_size": 128, 00:24:43.520 "large_cache_size": 16, 00:24:43.520 "task_count": 2048, 00:24:43.520 "sequence_count": 2048, 00:24:43.520 "buf_count": 2048 00:24:43.520 } 00:24:43.520 } 00:24:43.520 ] 00:24:43.520 }, 00:24:43.521 { 00:24:43.521 "subsystem": "bdev", 00:24:43.521 "config": [ 00:24:43.521 { 00:24:43.521 "method": "bdev_set_options", 00:24:43.521 "params": { 00:24:43.521 "bdev_io_pool_size": 65535, 00:24:43.521 "bdev_io_cache_size": 256, 00:24:43.521 "bdev_auto_examine": true, 00:24:43.521 "iobuf_small_cache_size": 128, 00:24:43.521 "iobuf_large_cache_size": 16 00:24:43.521 } 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "method": "bdev_raid_set_options", 00:24:43.521 "params": { 00:24:43.521 "process_window_size_kb": 1024, 00:24:43.521 "process_max_bandwidth_mb_sec": 0 00:24:43.521 } 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "method": "bdev_iscsi_set_options", 00:24:43.521 "params": { 00:24:43.521 "timeout_sec": 30 00:24:43.521 } 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "method": "bdev_nvme_set_options", 00:24:43.521 "params": { 00:24:43.521 "action_on_timeout": "none", 00:24:43.521 "timeout_us": 0, 00:24:43.521 "timeout_admin_us": 0, 00:24:43.521 "keep_alive_timeout_ms": 10000, 00:24:43.521 "arbitration_burst": 0, 00:24:43.521 "low_priority_weight": 0, 00:24:43.521 "medium_priority_weight": 0, 00:24:43.521 "high_priority_weight": 0, 00:24:43.521 "nvme_adminq_poll_period_us": 10000, 00:24:43.521 "nvme_ioq_poll_period_us": 0, 00:24:43.521 "io_queue_requests": 0, 00:24:43.521 "delay_cmd_submit": true, 00:24:43.521 "transport_retry_count": 4, 00:24:43.521 "bdev_retry_count": 3, 00:24:43.521 "transport_ack_timeout": 0, 00:24:43.521 "ctrlr_loss_timeout_sec": 0, 00:24:43.521 "reconnect_delay_sec": 0, 00:24:43.521 "fast_io_fail_timeout_sec": 0, 00:24:43.521 "disable_auto_failback": false, 00:24:43.521 "generate_uuids": false, 00:24:43.521 "transport_tos": 0, 00:24:43.521 "nvme_error_stat": false, 00:24:43.521 "rdma_srq_size": 0, 00:24:43.521 "io_path_stat": false, 00:24:43.521 "allow_accel_sequence": false, 00:24:43.521 "rdma_max_cq_size": 0, 00:24:43.521 
"rdma_cm_event_timeout_ms": 0, 00:24:43.521 "dhchap_digests": [ 00:24:43.521 "sha256", 00:24:43.521 "sha384", 00:24:43.521 "sha512" 00:24:43.521 ], 00:24:43.521 "dhchap_dhgroups": [ 00:24:43.521 "null", 00:24:43.521 "ffdhe2048", 00:24:43.521 "ffdhe3072", 00:24:43.521 "ffdhe4096", 00:24:43.521 "ffdhe6144", 00:24:43.521 "ffdhe8192" 00:24:43.521 ] 00:24:43.521 } 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "method": "bdev_nvme_set_hotplug", 00:24:43.521 "params": { 00:24:43.521 "period_us": 100000, 00:24:43.521 "enable": false 00:24:43.521 } 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "method": "bdev_malloc_create", 00:24:43.521 "params": { 00:24:43.521 "name": "malloc0", 00:24:43.521 "num_blocks": 8192, 00:24:43.521 "block_size": 4096, 00:24:43.521 "physical_block_size": 4096, 00:24:43.521 "uuid": "93332130-56d2-403c-aa63-9d890a35c8f7", 00:24:43.521 "optimal_io_boundary": 0, 00:24:43.521 "md_size": 0, 00:24:43.521 "dif_type": 0, 00:24:43.521 "dif_is_head_of_md": false, 00:24:43.521 "dif_pi_format": 0 00:24:43.521 } 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "method": "bdev_wait_for_examine" 00:24:43.521 } 00:24:43.521 ] 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "subsystem": "nbd", 00:24:43.521 "config": [] 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "subsystem": "scheduler", 00:24:43.521 "config": [ 00:24:43.521 { 00:24:43.521 "method": "framework_set_scheduler", 00:24:43.521 "params": { 00:24:43.521 "name": "static" 00:24:43.521 } 00:24:43.521 } 00:24:43.521 ] 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "subsystem": "nvmf", 00:24:43.521 "config": [ 00:24:43.521 { 00:24:43.521 "method": "nvmf_set_config", 00:24:43.521 "params": { 00:24:43.521 "discovery_filter": "match_any", 00:24:43.521 "admin_cmd_passthru": { 00:24:43.521 "identify_ctrlr": false 00:24:43.521 }, 00:24:43.521 "dhchap_digests": [ 00:24:43.521 "sha256", 00:24:43.521 "sha384", 00:24:43.521 "sha512" 00:24:43.521 ], 00:24:43.521 "dhchap_dhgroups": [ 00:24:43.521 "null", 00:24:43.521 "ffdhe2048", 00:24:43.521 "ffdhe3072", 00:24:43.521 "ffdhe4096", 00:24:43.521 "ffdhe6144", 00:24:43.521 "ffdhe8192" 00:24:43.521 ] 00:24:43.521 } 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "method": "nvmf_set_max_subsystems", 00:24:43.521 "params": { 00:24:43.521 "max_subsystems": 1024 00:24:43.521 } 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "method": "nvmf_set_crdt", 00:24:43.521 "params": { 00:24:43.521 "crdt1": 0, 00:24:43.521 "crdt2": 0, 00:24:43.521 "crdt3": 0 00:24:43.521 } 00:24:43.521 }, 00:24:43.521 { 00:24:43.521 "method": "nvmf_create_transport", 00:24:43.521 "params": { 00:24:43.521 "trtype": "TCP", 00:24:43.521 "max_queue_depth": 128, 00:24:43.521 "max_io_qpairs_per_ctrlr": 127, 00:24:43.521 "in_capsule_data_size": 4096, 00:24:43.521 "max_io_size": 131072, 00:24:43.521 "io_unit_size": 131072, 00:24:43.521 "max_aq_depth": 128, 00:24:43.521 "num_shared_buffers": 511, 00:24:43.521 "buf_cache_size": 4294967295, 00:24:43.521 "dif_insert_or_strip": false, 00:24:43.522 "zcopy": false, 00:24:43.522 "c2h_success": false, 00:24:43.522 "sock_priority": 0, 00:24:43.522 "abort_timeout_sec": 1, 00:24:43.522 "ack_timeout": 0, 00:24:43.522 "data_wr_pool_size": 0 00:24:43.522 } 00:24:43.522 }, 00:24:43.522 { 00:24:43.522 "method": "nvmf_create_subsystem", 00:24:43.522 "params": { 00:24:43.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.522 "allow_any_host": false, 00:24:43.522 "serial_number": "SPDK00000000000001", 00:24:43.522 "model_number": "SPDK bdev Controller", 00:24:43.522 "max_namespaces": 10, 00:24:43.522 "min_cntlid": 1, 00:24:43.522 
"max_cntlid": 65519, 00:24:43.522 "ana_reporting": false 00:24:43.522 } 00:24:43.522 }, 00:24:43.522 { 00:24:43.522 "method": "nvmf_subsystem_add_host", 00:24:43.522 "params": { 00:24:43.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.522 "host": "nqn.2016-06.io.spdk:host1", 00:24:43.522 "psk": "key0" 00:24:43.522 } 00:24:43.522 }, 00:24:43.522 { 00:24:43.522 "method": "nvmf_subsystem_add_ns", 00:24:43.522 "params": { 00:24:43.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.522 "namespace": { 00:24:43.522 "nsid": 1, 00:24:43.522 "bdev_name": "malloc0", 00:24:43.522 "nguid": "9333213056D2403CAA639D890A35C8F7", 00:24:43.522 "uuid": "93332130-56d2-403c-aa63-9d890a35c8f7", 00:24:43.522 "no_auto_visible": false 00:24:43.522 } 00:24:43.522 } 00:24:43.522 }, 00:24:43.522 { 00:24:43.522 "method": "nvmf_subsystem_add_listener", 00:24:43.522 "params": { 00:24:43.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.522 "listen_address": { 00:24:43.522 "trtype": "TCP", 00:24:43.522 "adrfam": "IPv4", 00:24:43.522 "traddr": "10.0.0.2", 00:24:43.522 "trsvcid": "4420" 00:24:43.522 }, 00:24:43.522 "secure_channel": true 00:24:43.522 } 00:24:43.522 } 00:24:43.522 ] 00:24:43.522 } 00:24:43.522 ] 00:24:43.522 }' 00:24:43.522 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:43.784 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:43.784 "subsystems": [ 00:24:43.784 { 00:24:43.784 "subsystem": "keyring", 00:24:43.784 "config": [ 00:24:43.784 { 00:24:43.784 "method": "keyring_file_add_key", 00:24:43.784 "params": { 00:24:43.784 "name": "key0", 00:24:43.784 "path": "/tmp/tmp.OlsF664L0z" 00:24:43.784 } 00:24:43.784 } 00:24:43.784 ] 00:24:43.784 }, 00:24:43.784 { 00:24:43.784 "subsystem": "iobuf", 00:24:43.784 "config": [ 00:24:43.784 { 00:24:43.784 "method": "iobuf_set_options", 00:24:43.784 "params": { 00:24:43.784 "small_pool_count": 8192, 00:24:43.784 "large_pool_count": 1024, 00:24:43.784 "small_bufsize": 8192, 00:24:43.784 "large_bufsize": 135168, 00:24:43.784 "enable_numa": false 00:24:43.784 } 00:24:43.784 } 00:24:43.784 ] 00:24:43.784 }, 00:24:43.784 { 00:24:43.784 "subsystem": "sock", 00:24:43.784 "config": [ 00:24:43.784 { 00:24:43.784 "method": "sock_set_default_impl", 00:24:43.784 "params": { 00:24:43.784 "impl_name": "posix" 00:24:43.784 } 00:24:43.784 }, 00:24:43.784 { 00:24:43.784 "method": "sock_impl_set_options", 00:24:43.784 "params": { 00:24:43.784 "impl_name": "ssl", 00:24:43.784 "recv_buf_size": 4096, 00:24:43.784 "send_buf_size": 4096, 00:24:43.784 "enable_recv_pipe": true, 00:24:43.784 "enable_quickack": false, 00:24:43.784 "enable_placement_id": 0, 00:24:43.784 "enable_zerocopy_send_server": true, 00:24:43.784 "enable_zerocopy_send_client": false, 00:24:43.784 "zerocopy_threshold": 0, 00:24:43.785 "tls_version": 0, 00:24:43.785 "enable_ktls": false 00:24:43.785 } 00:24:43.785 }, 00:24:43.785 { 00:24:43.785 "method": "sock_impl_set_options", 00:24:43.785 "params": { 00:24:43.785 "impl_name": "posix", 00:24:43.785 "recv_buf_size": 2097152, 00:24:43.785 "send_buf_size": 2097152, 00:24:43.785 "enable_recv_pipe": true, 00:24:43.785 "enable_quickack": false, 00:24:43.785 "enable_placement_id": 0, 00:24:43.785 "enable_zerocopy_send_server": true, 00:24:43.785 "enable_zerocopy_send_client": false, 00:24:43.785 "zerocopy_threshold": 0, 00:24:43.785 "tls_version": 0, 00:24:43.785 "enable_ktls": false 00:24:43.785 } 00:24:43.785 
} 00:24:43.785 ] 00:24:43.785 }, 00:24:43.785 { 00:24:43.785 "subsystem": "vmd", 00:24:43.785 "config": [] 00:24:43.785 }, 00:24:43.785 { 00:24:43.785 "subsystem": "accel", 00:24:43.785 "config": [ 00:24:43.785 { 00:24:43.785 "method": "accel_set_options", 00:24:43.785 "params": { 00:24:43.785 "small_cache_size": 128, 00:24:43.785 "large_cache_size": 16, 00:24:43.785 "task_count": 2048, 00:24:43.785 "sequence_count": 2048, 00:24:43.785 "buf_count": 2048 00:24:43.785 } 00:24:43.785 } 00:24:43.785 ] 00:24:43.785 }, 00:24:43.785 { 00:24:43.785 "subsystem": "bdev", 00:24:43.785 "config": [ 00:24:43.785 { 00:24:43.785 "method": "bdev_set_options", 00:24:43.785 "params": { 00:24:43.785 "bdev_io_pool_size": 65535, 00:24:43.785 "bdev_io_cache_size": 256, 00:24:43.785 "bdev_auto_examine": true, 00:24:43.785 "iobuf_small_cache_size": 128, 00:24:43.785 "iobuf_large_cache_size": 16 00:24:43.785 } 00:24:43.785 }, 00:24:43.785 { 00:24:43.785 "method": "bdev_raid_set_options", 00:24:43.785 "params": { 00:24:43.785 "process_window_size_kb": 1024, 00:24:43.785 "process_max_bandwidth_mb_sec": 0 00:24:43.785 } 00:24:43.785 }, 00:24:43.785 { 00:24:43.785 "method": "bdev_iscsi_set_options", 00:24:43.785 "params": { 00:24:43.785 "timeout_sec": 30 00:24:43.785 } 00:24:43.785 }, 00:24:43.785 { 00:24:43.785 "method": "bdev_nvme_set_options", 00:24:43.785 "params": { 00:24:43.785 "action_on_timeout": "none", 00:24:43.785 "timeout_us": 0, 00:24:43.785 "timeout_admin_us": 0, 00:24:43.785 "keep_alive_timeout_ms": 10000, 00:24:43.785 "arbitration_burst": 0, 00:24:43.785 "low_priority_weight": 0, 00:24:43.785 "medium_priority_weight": 0, 00:24:43.785 "high_priority_weight": 0, 00:24:43.785 "nvme_adminq_poll_period_us": 10000, 00:24:43.785 "nvme_ioq_poll_period_us": 0, 00:24:43.785 "io_queue_requests": 512, 00:24:43.785 "delay_cmd_submit": true, 00:24:43.785 "transport_retry_count": 4, 00:24:43.785 "bdev_retry_count": 3, 00:24:43.785 "transport_ack_timeout": 0, 00:24:43.785 "ctrlr_loss_timeout_sec": 0, 00:24:43.785 "reconnect_delay_sec": 0, 00:24:43.785 "fast_io_fail_timeout_sec": 0, 00:24:43.785 "disable_auto_failback": false, 00:24:43.785 "generate_uuids": false, 00:24:43.785 "transport_tos": 0, 00:24:43.785 "nvme_error_stat": false, 00:24:43.785 "rdma_srq_size": 0, 00:24:43.785 "io_path_stat": false, 00:24:43.785 "allow_accel_sequence": false, 00:24:43.785 "rdma_max_cq_size": 0, 00:24:43.785 "rdma_cm_event_timeout_ms": 0, 00:24:43.785 "dhchap_digests": [ 00:24:43.785 "sha256", 00:24:43.785 "sha384", 00:24:43.785 "sha512" 00:24:43.785 ], 00:24:43.785 "dhchap_dhgroups": [ 00:24:43.785 "null", 00:24:43.785 "ffdhe2048", 00:24:43.785 "ffdhe3072", 00:24:43.785 "ffdhe4096", 00:24:43.785 "ffdhe6144", 00:24:43.785 "ffdhe8192" 00:24:43.785 ] 00:24:43.785 } 00:24:43.785 }, 00:24:43.785 { 00:24:43.785 "method": "bdev_nvme_attach_controller", 00:24:43.785 "params": { 00:24:43.785 "name": "TLSTEST", 00:24:43.785 "trtype": "TCP", 00:24:43.785 "adrfam": "IPv4", 00:24:43.785 "traddr": "10.0.0.2", 00:24:43.785 "trsvcid": "4420", 00:24:43.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.785 "prchk_reftag": false, 00:24:43.785 "prchk_guard": false, 00:24:43.785 "ctrlr_loss_timeout_sec": 0, 00:24:43.785 "reconnect_delay_sec": 0, 00:24:43.785 "fast_io_fail_timeout_sec": 0, 00:24:43.785 "psk": "key0", 00:24:43.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.785 "hdgst": false, 00:24:43.785 "ddgst": false, 00:24:43.785 "multipath": "multipath" 00:24:43.785 } 00:24:43.785 }, 00:24:43.785 { 00:24:43.785 "method": 
"bdev_nvme_set_hotplug", 00:24:43.785 "params": { 00:24:43.785 "period_us": 100000, 00:24:43.785 "enable": false 00:24:43.785 } 00:24:43.785 }, 00:24:43.785 { 00:24:43.785 "method": "bdev_wait_for_examine" 00:24:43.785 } 00:24:43.785 ] 00:24:43.785 }, 00:24:43.785 { 00:24:43.785 "subsystem": "nbd", 00:24:43.785 "config": [] 00:24:43.785 } 00:24:43.785 ] 00:24:43.785 }' 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2735338 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2735338 ']' 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2735338 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2735338 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2735338' 00:24:43.785 killing process with pid 2735338 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2735338 00:24:43.785 Received shutdown signal, test time was about 10.000000 seconds 00:24:43.785 00:24:43.785 Latency(us) 00:24:43.785 [2024-11-20T05:36:03.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.785 [2024-11-20T05:36:03.705Z] =================================================================================================================== 00:24:43.785 [2024-11-20T05:36:03.705Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2735338 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2734957 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2734957 ']' 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2734957 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:43.785 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2734957 00:24:44.046 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:44.047 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:44.047 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2734957' 00:24:44.047 killing process with pid 2734957 00:24:44.047 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2734957 00:24:44.047 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2734957 00:24:44.047 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:44.047 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:44.047 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:44.047 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.047 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:44.047 "subsystems": [ 00:24:44.047 { 00:24:44.047 "subsystem": "keyring", 00:24:44.047 "config": [ 00:24:44.047 { 00:24:44.047 "method": "keyring_file_add_key", 00:24:44.047 "params": { 00:24:44.047 "name": "key0", 00:24:44.047 "path": "/tmp/tmp.OlsF664L0z" 00:24:44.047 } 00:24:44.047 } 00:24:44.047 ] 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "subsystem": "iobuf", 00:24:44.047 "config": [ 00:24:44.047 { 00:24:44.047 "method": "iobuf_set_options", 00:24:44.047 "params": { 00:24:44.047 "small_pool_count": 8192, 00:24:44.047 "large_pool_count": 1024, 00:24:44.047 "small_bufsize": 8192, 00:24:44.047 "large_bufsize": 135168, 00:24:44.047 "enable_numa": false 00:24:44.047 } 00:24:44.047 } 00:24:44.047 ] 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "subsystem": "sock", 00:24:44.047 "config": [ 00:24:44.047 { 00:24:44.047 "method": "sock_set_default_impl", 00:24:44.047 "params": { 00:24:44.047 "impl_name": "posix" 00:24:44.047 } 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "method": "sock_impl_set_options", 00:24:44.047 "params": { 00:24:44.047 "impl_name": "ssl", 00:24:44.047 "recv_buf_size": 4096, 00:24:44.047 "send_buf_size": 4096, 00:24:44.047 "enable_recv_pipe": true, 00:24:44.047 "enable_quickack": false, 00:24:44.047 "enable_placement_id": 0, 00:24:44.047 "enable_zerocopy_send_server": true, 00:24:44.047 "enable_zerocopy_send_client": false, 00:24:44.047 "zerocopy_threshold": 0, 00:24:44.047 "tls_version": 0, 00:24:44.047 "enable_ktls": false 00:24:44.047 } 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "method": "sock_impl_set_options", 00:24:44.047 "params": { 00:24:44.047 "impl_name": "posix", 00:24:44.047 "recv_buf_size": 2097152, 00:24:44.047 "send_buf_size": 2097152, 00:24:44.047 "enable_recv_pipe": true, 00:24:44.047 "enable_quickack": false, 00:24:44.047 "enable_placement_id": 0, 00:24:44.047 "enable_zerocopy_send_server": true, 00:24:44.047 "enable_zerocopy_send_client": false, 00:24:44.047 "zerocopy_threshold": 0, 00:24:44.047 "tls_version": 0, 00:24:44.047 "enable_ktls": false 00:24:44.047 } 00:24:44.047 } 00:24:44.047 ] 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "subsystem": "vmd", 00:24:44.047 "config": [] 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "subsystem": "accel", 00:24:44.047 "config": [ 00:24:44.047 { 00:24:44.047 "method": "accel_set_options", 00:24:44.047 "params": { 00:24:44.047 "small_cache_size": 128, 00:24:44.047 "large_cache_size": 16, 00:24:44.047 "task_count": 2048, 00:24:44.047 "sequence_count": 2048, 00:24:44.047 "buf_count": 2048 00:24:44.047 } 00:24:44.047 } 00:24:44.047 ] 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "subsystem": "bdev", 00:24:44.047 "config": [ 00:24:44.047 { 00:24:44.047 "method": "bdev_set_options", 00:24:44.047 "params": { 00:24:44.047 "bdev_io_pool_size": 65535, 00:24:44.047 "bdev_io_cache_size": 256, 00:24:44.047 "bdev_auto_examine": true, 00:24:44.047 "iobuf_small_cache_size": 128, 00:24:44.047 "iobuf_large_cache_size": 16 00:24:44.047 } 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "method": "bdev_raid_set_options", 00:24:44.047 "params": { 00:24:44.047 
"process_window_size_kb": 1024, 00:24:44.047 "process_max_bandwidth_mb_sec": 0 00:24:44.047 } 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "method": "bdev_iscsi_set_options", 00:24:44.047 "params": { 00:24:44.047 "timeout_sec": 30 00:24:44.047 } 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "method": "bdev_nvme_set_options", 00:24:44.047 "params": { 00:24:44.047 "action_on_timeout": "none", 00:24:44.047 "timeout_us": 0, 00:24:44.047 "timeout_admin_us": 0, 00:24:44.047 "keep_alive_timeout_ms": 10000, 00:24:44.047 "arbitration_burst": 0, 00:24:44.047 "low_priority_weight": 0, 00:24:44.047 "medium_priority_weight": 0, 00:24:44.047 "high_priority_weight": 0, 00:24:44.047 "nvme_adminq_poll_period_us": 10000, 00:24:44.047 "nvme_ioq_poll_period_us": 0, 00:24:44.047 "io_queue_requests": 0, 00:24:44.047 "delay_cmd_submit": true, 00:24:44.047 "transport_retry_count": 4, 00:24:44.047 "bdev_retry_count": 3, 00:24:44.047 "transport_ack_timeout": 0, 00:24:44.047 "ctrlr_loss_timeout_sec": 0, 00:24:44.047 "reconnect_delay_sec": 0, 00:24:44.047 "fast_io_fail_timeout_sec": 0, 00:24:44.047 "disable_auto_failback": false, 00:24:44.047 "generate_uuids": false, 00:24:44.047 "transport_tos": 0, 00:24:44.047 "nvme_error_stat": false, 00:24:44.047 "rdma_srq_size": 0, 00:24:44.047 "io_path_stat": false, 00:24:44.047 "allow_accel_sequence": false, 00:24:44.047 "rdma_max_cq_size": 0, 00:24:44.047 "rdma_cm_event_timeout_ms": 0, 00:24:44.047 "dhchap_digests": [ 00:24:44.047 "sha256", 00:24:44.047 "sha384", 00:24:44.047 "sha512" 00:24:44.047 ], 00:24:44.047 "dhchap_dhgroups": [ 00:24:44.047 "null", 00:24:44.047 "ffdhe2048", 00:24:44.047 "ffdhe3072", 00:24:44.047 "ffdhe4096", 00:24:44.047 "ffdhe6144", 00:24:44.047 "ffdhe8192" 00:24:44.047 ] 00:24:44.047 } 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "method": "bdev_nvme_set_hotplug", 00:24:44.047 "params": { 00:24:44.047 "period_us": 100000, 00:24:44.047 "enable": false 00:24:44.047 } 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "method": "bdev_malloc_create", 00:24:44.047 "params": { 00:24:44.047 "name": "malloc0", 00:24:44.047 "num_blocks": 8192, 00:24:44.047 "block_size": 4096, 00:24:44.047 "physical_block_size": 4096, 00:24:44.047 "uuid": "93332130-56d2-403c-aa63-9d890a35c8f7", 00:24:44.047 "optimal_io_boundary": 0, 00:24:44.047 "md_size": 0, 00:24:44.047 "dif_type": 0, 00:24:44.047 "dif_is_head_of_md": false, 00:24:44.047 "dif_pi_format": 0 00:24:44.047 } 00:24:44.047 }, 00:24:44.047 { 00:24:44.047 "method": "bdev_wait_for_examine" 00:24:44.047 } 00:24:44.047 ] 00:24:44.048 }, 00:24:44.048 { 00:24:44.048 "subsystem": "nbd", 00:24:44.048 "config": [] 00:24:44.048 }, 00:24:44.048 { 00:24:44.048 "subsystem": "scheduler", 00:24:44.048 "config": [ 00:24:44.048 { 00:24:44.048 "method": "framework_set_scheduler", 00:24:44.048 "params": { 00:24:44.048 "name": "static" 00:24:44.048 } 00:24:44.048 } 00:24:44.048 ] 00:24:44.048 }, 00:24:44.048 { 00:24:44.048 "subsystem": "nvmf", 00:24:44.048 "config": [ 00:24:44.048 { 00:24:44.048 "method": "nvmf_set_config", 00:24:44.048 "params": { 00:24:44.048 "discovery_filter": "match_any", 00:24:44.048 "admin_cmd_passthru": { 00:24:44.048 "identify_ctrlr": false 00:24:44.048 }, 00:24:44.048 "dhchap_digests": [ 00:24:44.048 "sha256", 00:24:44.048 "sha384", 00:24:44.048 "sha512" 00:24:44.048 ], 00:24:44.048 "dhchap_dhgroups": [ 00:24:44.048 "null", 00:24:44.048 "ffdhe2048", 00:24:44.048 "ffdhe3072", 00:24:44.048 "ffdhe4096", 00:24:44.048 "ffdhe6144", 00:24:44.048 "ffdhe8192" 00:24:44.048 ] 00:24:44.048 } 00:24:44.048 }, 00:24:44.048 { 
00:24:44.048 "method": "nvmf_set_max_subsystems", 00:24:44.048 "params": { 00:24:44.048 "max_subsystems": 1024 00:24:44.048 } 00:24:44.048 }, 00:24:44.048 { 00:24:44.048 "method": "nvmf_set_crdt", 00:24:44.048 "params": { 00:24:44.048 "crdt1": 0, 00:24:44.048 "crdt2": 0, 00:24:44.048 "crdt3": 0 00:24:44.048 } 00:24:44.048 }, 00:24:44.048 { 00:24:44.048 "method": "nvmf_create_transport", 00:24:44.048 "params": { 00:24:44.048 "trtype": "TCP", 00:24:44.048 "max_queue_depth": 128, 00:24:44.048 "max_io_qpairs_per_ctrlr": 127, 00:24:44.048 "in_capsule_data_size": 4096, 00:24:44.048 "max_io_size": 131072, 00:24:44.048 "io_unit_size": 131072, 00:24:44.048 "max_aq_depth": 128, 00:24:44.048 "num_shared_buffers": 511, 00:24:44.048 "buf_cache_size": 4294967295, 00:24:44.048 "dif_insert_or_strip": false, 00:24:44.048 "zcopy": false, 00:24:44.048 "c2h_success": false, 00:24:44.048 "sock_priority": 0, 00:24:44.048 "abort_timeout_sec": 1, 00:24:44.048 "ack_timeout": 0, 00:24:44.048 "data_wr_pool_size": 0 00:24:44.048 } 00:24:44.048 }, 00:24:44.048 { 00:24:44.048 "method": "nvmf_create_subsystem", 00:24:44.048 "params": { 00:24:44.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.048 "allow_any_host": false, 00:24:44.048 "serial_number": "SPDK00000000000001", 00:24:44.048 "model_number": "SPDK bdev Controller", 00:24:44.048 "max_namespaces": 10, 00:24:44.048 "min_cntlid": 1, 00:24:44.048 "max_cntlid": 65519, 00:24:44.048 "ana_reporting": false 00:24:44.048 } 00:24:44.048 }, 00:24:44.048 { 00:24:44.048 "method": "nvmf_subsystem_add_host", 00:24:44.048 "params": { 00:24:44.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.048 "host": "nqn.2016-06.io.spdk:host1", 00:24:44.048 "psk": "key0" 00:24:44.048 } 00:24:44.048 }, 00:24:44.048 { 00:24:44.048 "method": "nvmf_subsystem_add_ns", 00:24:44.048 "params": { 00:24:44.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.048 "namespace": { 00:24:44.048 "nsid": 1, 00:24:44.048 "bdev_name": "malloc0", 00:24:44.048 "nguid": "9333213056D2403CAA639D890A35C8F7", 00:24:44.048 "uuid": "93332130-56d2-403c-aa63-9d890a35c8f7", 00:24:44.048 "no_auto_visible": false 00:24:44.048 } 00:24:44.048 } 00:24:44.048 }, 00:24:44.048 { 00:24:44.048 "method": "nvmf_subsystem_add_listener", 00:24:44.048 "params": { 00:24:44.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.048 "listen_address": { 00:24:44.048 "trtype": "TCP", 00:24:44.048 "adrfam": "IPv4", 00:24:44.048 "traddr": "10.0.0.2", 00:24:44.048 "trsvcid": "4420" 00:24:44.048 }, 00:24:44.048 "secure_channel": true 00:24:44.048 } 00:24:44.048 } 00:24:44.048 ] 00:24:44.048 } 00:24:44.048 ] 00:24:44.048 }' 00:24:44.048 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:44.048 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2735756 00:24:44.048 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2735756 00:24:44.048 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2735756 ']' 00:24:44.048 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.048 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:44.048 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:24:44.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.048 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:44.048 06:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.048 [2024-11-20 06:36:03.886031] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:44.048 [2024-11-20 06:36:03.886073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.048 [2024-11-20 06:36:03.943633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.308 [2024-11-20 06:36:03.972087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.308 [2024-11-20 06:36:03.972114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.308 [2024-11-20 06:36:03.972120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.308 [2024-11-20 06:36:03.972125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.308 [2024-11-20 06:36:03.972129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.308 [2024-11-20 06:36:03.972652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.308 [2024-11-20 06:36:04.166736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.308 [2024-11-20 06:36:04.198767] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:44.308 [2024-11-20 06:36:04.198953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2735837 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2735837 /var/tmp/bdevperf.sock 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2735837 ']' 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:44.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.879 06:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:44.879 "subsystems": [ 00:24:44.879 { 00:24:44.879 "subsystem": "keyring", 00:24:44.879 "config": [ 00:24:44.879 { 00:24:44.879 "method": "keyring_file_add_key", 00:24:44.879 "params": { 00:24:44.879 "name": "key0", 00:24:44.879 "path": "/tmp/tmp.OlsF664L0z" 00:24:44.879 } 00:24:44.879 } 00:24:44.879 ] 00:24:44.879 }, 00:24:44.879 { 00:24:44.879 "subsystem": "iobuf", 00:24:44.879 "config": [ 00:24:44.879 { 00:24:44.879 "method": "iobuf_set_options", 00:24:44.879 "params": { 00:24:44.879 "small_pool_count": 8192, 00:24:44.879 "large_pool_count": 1024, 00:24:44.879 "small_bufsize": 8192, 00:24:44.879 "large_bufsize": 135168, 00:24:44.879 "enable_numa": false 00:24:44.879 } 00:24:44.879 } 00:24:44.879 ] 00:24:44.879 }, 00:24:44.879 { 00:24:44.879 "subsystem": "sock", 00:24:44.879 "config": [ 00:24:44.879 { 00:24:44.879 "method": "sock_set_default_impl", 00:24:44.879 "params": { 00:24:44.879 "impl_name": "posix" 00:24:44.879 } 00:24:44.879 }, 00:24:44.879 { 00:24:44.879 "method": "sock_impl_set_options", 00:24:44.879 "params": { 00:24:44.879 "impl_name": "ssl", 00:24:44.879 "recv_buf_size": 4096, 00:24:44.879 "send_buf_size": 4096, 00:24:44.879 "enable_recv_pipe": true, 00:24:44.879 "enable_quickack": false, 00:24:44.879 "enable_placement_id": 0, 00:24:44.879 "enable_zerocopy_send_server": true, 00:24:44.879 "enable_zerocopy_send_client": false, 00:24:44.879 "zerocopy_threshold": 0, 00:24:44.879 "tls_version": 0, 00:24:44.879 "enable_ktls": false 00:24:44.879 } 00:24:44.879 }, 00:24:44.879 { 00:24:44.879 "method": "sock_impl_set_options", 00:24:44.879 "params": { 00:24:44.879 "impl_name": "posix", 00:24:44.879 "recv_buf_size": 2097152, 00:24:44.879 "send_buf_size": 2097152, 00:24:44.879 "enable_recv_pipe": true, 00:24:44.879 "enable_quickack": false, 00:24:44.879 "enable_placement_id": 0, 00:24:44.879 "enable_zerocopy_send_server": true, 00:24:44.879 "enable_zerocopy_send_client": false, 00:24:44.879 "zerocopy_threshold": 0, 00:24:44.879 "tls_version": 0, 00:24:44.879 "enable_ktls": false 00:24:44.879 } 00:24:44.879 } 00:24:44.879 ] 00:24:44.879 }, 00:24:44.879 { 00:24:44.879 "subsystem": "vmd", 00:24:44.879 "config": [] 00:24:44.879 }, 00:24:44.879 { 00:24:44.879 "subsystem": "accel", 00:24:44.879 "config": [ 00:24:44.879 { 00:24:44.879 "method": "accel_set_options", 00:24:44.879 "params": { 00:24:44.879 "small_cache_size": 128, 00:24:44.879 "large_cache_size": 16, 00:24:44.879 "task_count": 2048, 00:24:44.879 "sequence_count": 2048, 00:24:44.879 "buf_count": 2048 00:24:44.879 } 00:24:44.879 } 00:24:44.879 ] 00:24:44.879 }, 00:24:44.879 { 00:24:44.879 "subsystem": "bdev", 00:24:44.879 "config": [ 00:24:44.879 { 00:24:44.879 "method": "bdev_set_options", 00:24:44.879 "params": { 00:24:44.879 "bdev_io_pool_size": 65535, 00:24:44.879 "bdev_io_cache_size": 256, 00:24:44.879 "bdev_auto_examine": true, 00:24:44.879 "iobuf_small_cache_size": 128, 
00:24:44.879 "iobuf_large_cache_size": 16 00:24:44.879 } 00:24:44.879 }, 00:24:44.879 { 00:24:44.879 "method": "bdev_raid_set_options", 00:24:44.879 "params": { 00:24:44.879 "process_window_size_kb": 1024, 00:24:44.879 "process_max_bandwidth_mb_sec": 0 00:24:44.879 } 00:24:44.879 }, 00:24:44.879 { 00:24:44.879 "method": "bdev_iscsi_set_options", 00:24:44.879 "params": { 00:24:44.879 "timeout_sec": 30 00:24:44.879 } 00:24:44.879 }, 00:24:44.879 { 00:24:44.879 "method": "bdev_nvme_set_options", 00:24:44.879 "params": { 00:24:44.879 "action_on_timeout": "none", 00:24:44.879 "timeout_us": 0, 00:24:44.879 "timeout_admin_us": 0, 00:24:44.879 "keep_alive_timeout_ms": 10000, 00:24:44.879 "arbitration_burst": 0, 00:24:44.879 "low_priority_weight": 0, 00:24:44.880 "medium_priority_weight": 0, 00:24:44.880 "high_priority_weight": 0, 00:24:44.880 "nvme_adminq_poll_period_us": 10000, 00:24:44.880 "nvme_ioq_poll_period_us": 0, 00:24:44.880 "io_queue_requests": 512, 00:24:44.880 "delay_cmd_submit": true, 00:24:44.880 "transport_retry_count": 4, 00:24:44.880 "bdev_retry_count": 3, 00:24:44.880 "transport_ack_timeout": 0, 00:24:44.880 "ctrlr_loss_timeout_sec": 0, 00:24:44.880 "reconnect_delay_sec": 0, 00:24:44.880 "fast_io_fail_timeout_sec": 0, 00:24:44.880 "disable_auto_failback": false, 00:24:44.880 "generate_uuids": false, 00:24:44.880 "transport_tos": 0, 00:24:44.880 "nvme_error_stat": false, 00:24:44.880 "rdma_srq_size": 0, 00:24:44.880 "io_path_stat": false, 00:24:44.880 "allow_accel_sequence": false, 00:24:44.880 "rdma_max_cq_size": 0, 00:24:44.880 "rdma_cm_event_timeout_ms": 0, 00:24:44.880 "dhchap_digests": [ 00:24:44.880 "sha256", 00:24:44.880 "sha384", 00:24:44.880 "sha512" 00:24:44.880 ], 00:24:44.880 "dhchap_dhgroups": [ 00:24:44.880 "null", 00:24:44.880 "ffdhe2048", 00:24:44.880 "ffdhe3072", 00:24:44.880 "ffdhe4096", 00:24:44.880 "ffdhe6144", 00:24:44.880 "ffdhe8192" 00:24:44.880 ] 00:24:44.880 } 00:24:44.880 }, 00:24:44.880 { 00:24:44.880 "method": "bdev_nvme_attach_controller", 00:24:44.880 "params": { 00:24:44.880 "name": "TLSTEST", 00:24:44.880 "trtype": "TCP", 00:24:44.880 "adrfam": "IPv4", 00:24:44.880 "traddr": "10.0.0.2", 00:24:44.880 "trsvcid": "4420", 00:24:44.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.880 "prchk_reftag": false, 00:24:44.880 "prchk_guard": false, 00:24:44.880 "ctrlr_loss_timeout_sec": 0, 00:24:44.880 "reconnect_delay_sec": 0, 00:24:44.880 "fast_io_fail_timeout_sec": 0, 00:24:44.880 "psk": "key0", 00:24:44.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:44.880 "hdgst": false, 00:24:44.880 "ddgst": false, 00:24:44.880 "multipath": "multipath" 00:24:44.880 } 00:24:44.880 }, 00:24:44.880 { 00:24:44.880 "method": "bdev_nvme_set_hotplug", 00:24:44.880 "params": { 00:24:44.880 "period_us": 100000, 00:24:44.880 "enable": false 00:24:44.880 } 00:24:44.880 }, 00:24:44.880 { 00:24:44.880 "method": "bdev_wait_for_examine" 00:24:44.880 } 00:24:44.880 ] 00:24:44.880 }, 00:24:44.880 { 00:24:44.880 "subsystem": "nbd", 00:24:44.880 "config": [] 00:24:44.880 } 00:24:44.880 ] 00:24:44.880 }' 00:24:44.880 [2024-11-20 06:36:04.773725] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:24:44.880 [2024-11-20 06:36:04.773784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2735837 ] 00:24:45.142 [2024-11-20 06:36:04.832976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.142 [2024-11-20 06:36:04.862134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.142 [2024-11-20 06:36:04.997522] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.714 06:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:45.714 06:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:45.714 06:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:45.975 Running I/O for 10 seconds... 00:24:47.857 5200.00 IOPS, 20.31 MiB/s [2024-11-20T05:36:08.718Z] 5018.00 IOPS, 19.60 MiB/s [2024-11-20T05:36:10.102Z] 5000.67 IOPS, 19.53 MiB/s [2024-11-20T05:36:10.674Z] 5173.00 IOPS, 20.21 MiB/s [2024-11-20T05:36:12.058Z] 5273.40 IOPS, 20.60 MiB/s [2024-11-20T05:36:13.000Z] 5195.00 IOPS, 20.29 MiB/s [2024-11-20T05:36:13.941Z] 5274.71 IOPS, 20.60 MiB/s [2024-11-20T05:36:14.882Z] 5248.12 IOPS, 20.50 MiB/s [2024-11-20T05:36:15.823Z] 5351.00 IOPS, 20.90 MiB/s [2024-11-20T05:36:15.823Z] 5332.90 IOPS, 20.83 MiB/s 00:24:55.903 Latency(us) 00:24:55.903 [2024-11-20T05:36:15.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.903 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:55.903 Verification LBA range: start 0x0 length 0x2000 00:24:55.903 TLSTESTn1 : 10.03 5328.03 20.81 0.00 0.00 23976.42 5515.95 31894.19 00:24:55.903 [2024-11-20T05:36:15.824Z] =================================================================================================================== 00:24:55.904 [2024-11-20T05:36:15.824Z] Total : 5328.03 20.81 0.00 0.00 23976.42 5515.95 31894.19 00:24:55.904 { 00:24:55.904 "results": [ 00:24:55.904 { 00:24:55.904 "job": "TLSTESTn1", 00:24:55.904 "core_mask": "0x4", 00:24:55.904 "workload": "verify", 00:24:55.904 "status": "finished", 00:24:55.904 "verify_range": { 00:24:55.904 "start": 0, 00:24:55.904 "length": 8192 00:24:55.904 }, 00:24:55.904 "queue_depth": 128, 00:24:55.904 "io_size": 4096, 00:24:55.904 "runtime": 10.032781, 00:24:55.904 "iops": 5328.0341711834435, 00:24:55.904 "mibps": 20.812633481185326, 00:24:55.904 "io_failed": 0, 00:24:55.904 "io_timeout": 0, 00:24:55.904 "avg_latency_us": 23976.415482181274, 00:24:55.904 "min_latency_us": 5515.946666666667, 00:24:55.904 "max_latency_us": 31894.18666666667 00:24:55.904 } 00:24:55.904 ], 00:24:55.904 "core_count": 1 00:24:55.904 } 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2735837 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2735837 ']' 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2735837 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2735837 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2735837' 00:24:55.904 killing process with pid 2735837 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2735837 00:24:55.904 Received shutdown signal, test time was about 10.000000 seconds 00:24:55.904 00:24:55.904 Latency(us) 00:24:55.904 [2024-11-20T05:36:15.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.904 [2024-11-20T05:36:15.824Z] =================================================================================================================== 00:24:55.904 [2024-11-20T05:36:15.824Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.904 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2735837 00:24:56.164 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2735756 00:24:56.164 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2735756 ']' 00:24:56.164 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2735756 00:24:56.164 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:24:56.164 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:56.164 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2735756 00:24:56.164 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:56.164 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:56.164 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2735756' 00:24:56.164 killing process with pid 2735756 00:24:56.164 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2735756 00:24:56.164 06:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2735756 00:24:56.164 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:56.164 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:56.164 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.164 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.425 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2738589 00:24:56.425 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2738589 00:24:56.425 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
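killprocess, used for every teardown above, is a helper from common/autotest_common.sh; its essential shape can be read straight off the traces: verify the pid is still alive, resolve its process name (a reactor thread here, never sudo), announce, then kill and reap. A simplified re-creation under those assumptions; the real helper has additional retry and sudo-owned-process handling not shown:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                            # bail out if already gone
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1
      fi
      # the real helper special-cases process_name = sudo; omitted in this sketch
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }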
00:24:56.425 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2738589 ']' 00:24:56.425 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.425 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:56.425 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.425 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:56.425 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.425 [2024-11-20 06:36:16.132931] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:56.425 [2024-11-20 06:36:16.132975] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.425 [2024-11-20 06:36:16.218351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.425 [2024-11-20 06:36:16.255313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.425 [2024-11-20 06:36:16.255352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.425 [2024-11-20 06:36:16.255360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.425 [2024-11-20 06:36:16.255367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.425 [2024-11-20 06:36:16.255373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
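The notices just above spell out how to inspect the tracepoint groups the test enables with -e 0xFFFF. While the target (app instance id 0, from -i 0) is still up, acting on them amounts to the two commands below; the build/bin location of spdk_trace is an assumption, since the notice gives only the bare command name:

  build/bin/spdk_trace -s nvmf -i 0     # decode a snapshot of events from instance 0
  cp /dev/shm/nvmf_trace.0 /tmp/        # or stash the shm file for offline analysis/debug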
00:24:56.425 [2024-11-20 06:36:16.256043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.367 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:57.367 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:57.367 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:57.367 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.367 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:57.367 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.367 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.OlsF664L0z 00:24:57.367 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OlsF664L0z 00:24:57.367 06:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:57.367 [2024-11-20 06:36:17.140299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.367 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:57.628 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:57.628 [2024-11-20 06:36:17.437025] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:57.628 [2024-11-20 06:36:17.437240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.628 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:57.888 malloc0 00:24:57.888 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:57.888 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OlsF664L0z 00:24:58.148 06:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:58.408 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:58.408 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2738956 00:24:58.408 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:58.408 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2738956 /var/tmp/bdevperf.sock 00:24:58.408 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 2738956 ']' 00:24:58.408 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:58.408 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:58.408 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:58.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:58.408 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:58.409 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.409 [2024-11-20 06:36:18.138311] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:58.409 [2024-11-20 06:36:18.138353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738956 ] 00:24:58.409 [2024-11-20 06:36:18.216599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.409 [2024-11-20 06:36:18.246389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.409 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:58.409 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:58.409 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OlsF664L0z 00:24:58.669 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:58.929 [2024-11-20 06:36:18.641956] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:58.929 nvme0n1 00:24:58.929 06:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:58.929 Running I/O for 1 seconds... 
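The initiator side traced above mirrors the target setup: the same PSK file is registered with the bdevperf instance's keyring, a TLS-protected controller is attached, and bdevperf's RPC helper drives the verify workload whose results follow. Condensed from the @229/@230/@234 traces, with checkout-relative paths:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OlsF664L0z
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests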
00:25:00.312 5389.00 IOPS, 21.05 MiB/s 00:25:00.312 Latency(us) 00:25:00.312 [2024-11-20T05:36:20.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.312 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:00.312 Verification LBA range: start 0x0 length 0x2000 00:25:00.312 nvme0n1 : 1.02 5423.72 21.19 0.00 0.00 23441.65 6062.08 28398.93 00:25:00.312 [2024-11-20T05:36:20.232Z] =================================================================================================================== 00:25:00.312 [2024-11-20T05:36:20.232Z] Total : 5423.72 21.19 0.00 0.00 23441.65 6062.08 28398.93 00:25:00.312 { 00:25:00.312 "results": [ 00:25:00.312 { 00:25:00.312 "job": "nvme0n1", 00:25:00.312 "core_mask": "0x2", 00:25:00.312 "workload": "verify", 00:25:00.312 "status": "finished", 00:25:00.312 "verify_range": { 00:25:00.312 "start": 0, 00:25:00.312 "length": 8192 00:25:00.312 }, 00:25:00.312 "queue_depth": 128, 00:25:00.312 "io_size": 4096, 00:25:00.312 "runtime": 1.017199, 00:25:00.312 "iops": 5423.717483009716, 00:25:00.312 "mibps": 21.1863964180067, 00:25:00.312 "io_failed": 0, 00:25:00.312 "io_timeout": 0, 00:25:00.313 "avg_latency_us": 23441.64731073651, 00:25:00.313 "min_latency_us": 6062.08, 00:25:00.313 "max_latency_us": 28398.933333333334 00:25:00.313 } 00:25:00.313 ], 00:25:00.313 "core_count": 1 00:25:00.313 } 00:25:00.313 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2738956 00:25:00.313 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2738956 ']' 00:25:00.313 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2738956 00:25:00.313 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:00.313 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:00.313 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2738956 00:25:00.313 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:00.313 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:00.313 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2738956' 00:25:00.313 killing process with pid 2738956 00:25:00.313 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2738956 00:25:00.313 Received shutdown signal, test time was about 1.000000 seconds 00:25:00.313 00:25:00.313 Latency(us) 00:25:00.313 [2024-11-20T05:36:20.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.313 [2024-11-20T05:36:20.233Z] =================================================================================================================== 00:25:00.313 [2024-11-20T05:36:20.233Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.313 06:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2738956 00:25:00.313 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2738589 00:25:00.313 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2738589 ']' 00:25:00.313 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2738589 00:25:00.313 06:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:00.313 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:00.313 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2738589 00:25:00.313 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:00.313 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:00.313 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2738589' 00:25:00.313 killing process with pid 2738589 00:25:00.313 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2738589 00:25:00.313 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2738589 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2739306 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2739306 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2739306 ']' 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:00.574 06:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.574 [2024-11-20 06:36:20.296871] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:00.574 [2024-11-20 06:36:20.296932] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.574 [2024-11-20 06:36:20.397818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.574 [2024-11-20 06:36:20.447056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.574 [2024-11-20 06:36:20.447114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
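The kill sequence traced above follows autotest's killprocess helper. An approximate reconstruction from the steps visible in the trace (a sketch only; the sudo branch is an assumption about how the real helper in autotest_common.sh handles privileged wrappers):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                       # no pid was given
        kill -0 "$pid" 2>/dev/null || return 1          # process already gone?
        local name=""
        [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" = sudo ]; then
            sudo kill "$pid"                            # assumed: signal through the sudo wrapper
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                     # reap it and surface the exit status
    }

In the trace the comm check resolves to reactor_0 or reactor_1 (SPDK reactor threads rename themselves), so the plain kill path is taken each time.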
00:25:00.574 [2024-11-20 06:36:20.447123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.574 [2024-11-20 06:36:20.447131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.574 [2024-11-20 06:36:20.447137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.574 [2024-11-20 06:36:20.447953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.516 [2024-11-20 06:36:21.166474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.516 malloc0 00:25:01.516 [2024-11-20 06:36:21.196648] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:01.516 [2024-11-20 06:36:21.196977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2739654 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2739654 /var/tmp/bdevperf.sock 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2739654 ']' 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:01.516 06:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.516 [2024-11-20 06:36:21.280093] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
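waitforlisten, echoed above for both the target socket and /var/tmp/bdevperf.sock, blocks until the freshly launched app answers on its UNIX-domain RPC socket. A rough sketch of its shape as inferred from the traced lines (the polling probe is an assumption; the real helper in autotest_common.sh also verifies that the RPC server responds, not just that the socket file exists):

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        [ -n "$pid" ] || return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for (( i = max_retries; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1      # the app died before listening
            [ -S "$rpc_addr" ] && break                 # assumed probe: socket file appeared
            sleep 0.5
        done
        (( i == 0 )) && return 1                        # retries exhausted
        return 0
    }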
00:25:01.516 [2024-11-20 06:36:21.280157] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2739654 ] 00:25:01.516 [2024-11-20 06:36:21.367525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.516 [2024-11-20 06:36:21.401528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.456 06:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:02.456 06:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:02.456 06:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OlsF664L0z 00:25:02.456 06:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:02.717 [2024-11-20 06:36:22.392398] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:02.717 nvme0n1 00:25:02.717 06:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:02.717 Running I/O for 1 seconds... 00:25:03.919 5750.00 IOPS, 22.46 MiB/s 00:25:03.919 Latency(us) 00:25:03.919 [2024-11-20T05:36:23.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.919 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:03.919 Verification LBA range: start 0x0 length 0x2000 00:25:03.919 nvme0n1 : 1.01 5797.34 22.65 0.00 0.00 21928.35 4423.68 46967.47 00:25:03.919 [2024-11-20T05:36:23.839Z] =================================================================================================================== 00:25:03.919 [2024-11-20T05:36:23.839Z] Total : 5797.34 22.65 0.00 0.00 21928.35 4423.68 46967.47 00:25:03.919 { 00:25:03.919 "results": [ 00:25:03.919 { 00:25:03.919 "job": "nvme0n1", 00:25:03.919 "core_mask": "0x2", 00:25:03.919 "workload": "verify", 00:25:03.919 "status": "finished", 00:25:03.919 "verify_range": { 00:25:03.919 "start": 0, 00:25:03.919 "length": 8192 00:25:03.919 }, 00:25:03.919 "queue_depth": 128, 00:25:03.919 "io_size": 4096, 00:25:03.919 "runtime": 1.013914, 00:25:03.919 "iops": 5797.335868722594, 00:25:03.919 "mibps": 22.64584323719763, 00:25:03.919 "io_failed": 0, 00:25:03.919 "io_timeout": 0, 00:25:03.919 "avg_latency_us": 21928.345042531473, 00:25:03.919 "min_latency_us": 4423.68, 00:25:03.919 "max_latency_us": 46967.46666666667 00:25:03.919 } 00:25:03.919 ], 00:25:03.919 "core_count": 1 00:25:03.919 } 00:25:03.919 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:03.919 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.919 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:03.919 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.919 06:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:03.919 "subsystems": [ 00:25:03.919 { 00:25:03.919 "subsystem": "keyring", 00:25:03.919 "config": [ 00:25:03.919 { 00:25:03.919 "method": "keyring_file_add_key", 00:25:03.919 "params": { 00:25:03.919 "name": "key0", 00:25:03.919 "path": "/tmp/tmp.OlsF664L0z" 00:25:03.919 } 00:25:03.919 } 00:25:03.919 ] 00:25:03.919 }, 00:25:03.919 { 00:25:03.919 "subsystem": "iobuf", 00:25:03.919 "config": [ 00:25:03.919 { 00:25:03.919 "method": "iobuf_set_options", 00:25:03.919 "params": { 00:25:03.919 "small_pool_count": 8192, 00:25:03.919 "large_pool_count": 1024, 00:25:03.919 "small_bufsize": 8192, 00:25:03.919 "large_bufsize": 135168, 00:25:03.919 "enable_numa": false 00:25:03.919 } 00:25:03.919 } 00:25:03.919 ] 00:25:03.919 }, 00:25:03.919 { 00:25:03.919 "subsystem": "sock", 00:25:03.919 "config": [ 00:25:03.919 { 00:25:03.919 "method": "sock_set_default_impl", 00:25:03.919 "params": { 00:25:03.919 "impl_name": "posix" 00:25:03.919 } 00:25:03.919 }, 00:25:03.919 { 00:25:03.919 "method": "sock_impl_set_options", 00:25:03.919 "params": { 00:25:03.919 "impl_name": "ssl", 00:25:03.919 "recv_buf_size": 4096, 00:25:03.919 "send_buf_size": 4096, 00:25:03.919 "enable_recv_pipe": true, 00:25:03.919 "enable_quickack": false, 00:25:03.919 "enable_placement_id": 0, 00:25:03.919 "enable_zerocopy_send_server": true, 00:25:03.919 "enable_zerocopy_send_client": false, 00:25:03.919 "zerocopy_threshold": 0, 00:25:03.919 "tls_version": 0, 00:25:03.919 "enable_ktls": false 00:25:03.919 } 00:25:03.919 }, 00:25:03.919 { 00:25:03.919 "method": "sock_impl_set_options", 00:25:03.919 "params": { 00:25:03.919 "impl_name": "posix", 00:25:03.919 "recv_buf_size": 2097152, 00:25:03.919 "send_buf_size": 2097152, 00:25:03.919 "enable_recv_pipe": true, 00:25:03.919 "enable_quickack": false, 00:25:03.919 "enable_placement_id": 0, 00:25:03.919 "enable_zerocopy_send_server": true, 00:25:03.919 "enable_zerocopy_send_client": false, 00:25:03.919 "zerocopy_threshold": 0, 00:25:03.919 "tls_version": 0, 00:25:03.919 "enable_ktls": false 00:25:03.919 } 00:25:03.919 } 00:25:03.919 ] 00:25:03.919 }, 00:25:03.920 { 00:25:03.920 "subsystem": "vmd", 00:25:03.920 "config": [] 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "subsystem": "accel", 00:25:03.920 "config": [ 00:25:03.920 { 00:25:03.920 "method": "accel_set_options", 00:25:03.920 "params": { 00:25:03.920 "small_cache_size": 128, 00:25:03.920 "large_cache_size": 16, 00:25:03.920 "task_count": 2048, 00:25:03.920 "sequence_count": 2048, 00:25:03.920 "buf_count": 2048 00:25:03.920 } 00:25:03.920 } 00:25:03.920 ] 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "subsystem": "bdev", 00:25:03.920 "config": [ 00:25:03.920 { 00:25:03.920 "method": "bdev_set_options", 00:25:03.920 "params": { 00:25:03.920 "bdev_io_pool_size": 65535, 00:25:03.920 "bdev_io_cache_size": 256, 00:25:03.920 "bdev_auto_examine": true, 00:25:03.920 "iobuf_small_cache_size": 128, 00:25:03.920 "iobuf_large_cache_size": 16 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "bdev_raid_set_options", 00:25:03.920 "params": { 00:25:03.920 "process_window_size_kb": 1024, 00:25:03.920 "process_max_bandwidth_mb_sec": 0 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "bdev_iscsi_set_options", 00:25:03.920 "params": { 00:25:03.920 "timeout_sec": 30 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "bdev_nvme_set_options", 00:25:03.920 "params": { 00:25:03.920 "action_on_timeout": "none", 00:25:03.920 
"timeout_us": 0, 00:25:03.920 "timeout_admin_us": 0, 00:25:03.920 "keep_alive_timeout_ms": 10000, 00:25:03.920 "arbitration_burst": 0, 00:25:03.920 "low_priority_weight": 0, 00:25:03.920 "medium_priority_weight": 0, 00:25:03.920 "high_priority_weight": 0, 00:25:03.920 "nvme_adminq_poll_period_us": 10000, 00:25:03.920 "nvme_ioq_poll_period_us": 0, 00:25:03.920 "io_queue_requests": 0, 00:25:03.920 "delay_cmd_submit": true, 00:25:03.920 "transport_retry_count": 4, 00:25:03.920 "bdev_retry_count": 3, 00:25:03.920 "transport_ack_timeout": 0, 00:25:03.920 "ctrlr_loss_timeout_sec": 0, 00:25:03.920 "reconnect_delay_sec": 0, 00:25:03.920 "fast_io_fail_timeout_sec": 0, 00:25:03.920 "disable_auto_failback": false, 00:25:03.920 "generate_uuids": false, 00:25:03.920 "transport_tos": 0, 00:25:03.920 "nvme_error_stat": false, 00:25:03.920 "rdma_srq_size": 0, 00:25:03.920 "io_path_stat": false, 00:25:03.920 "allow_accel_sequence": false, 00:25:03.920 "rdma_max_cq_size": 0, 00:25:03.920 "rdma_cm_event_timeout_ms": 0, 00:25:03.920 "dhchap_digests": [ 00:25:03.920 "sha256", 00:25:03.920 "sha384", 00:25:03.920 "sha512" 00:25:03.920 ], 00:25:03.920 "dhchap_dhgroups": [ 00:25:03.920 "null", 00:25:03.920 "ffdhe2048", 00:25:03.920 "ffdhe3072", 00:25:03.920 "ffdhe4096", 00:25:03.920 "ffdhe6144", 00:25:03.920 "ffdhe8192" 00:25:03.920 ] 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "bdev_nvme_set_hotplug", 00:25:03.920 "params": { 00:25:03.920 "period_us": 100000, 00:25:03.920 "enable": false 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "bdev_malloc_create", 00:25:03.920 "params": { 00:25:03.920 "name": "malloc0", 00:25:03.920 "num_blocks": 8192, 00:25:03.920 "block_size": 4096, 00:25:03.920 "physical_block_size": 4096, 00:25:03.920 "uuid": "00512f5c-8d0d-4262-b203-38f304ce4d20", 00:25:03.920 "optimal_io_boundary": 0, 00:25:03.920 "md_size": 0, 00:25:03.920 "dif_type": 0, 00:25:03.920 "dif_is_head_of_md": false, 00:25:03.920 "dif_pi_format": 0 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "bdev_wait_for_examine" 00:25:03.920 } 00:25:03.920 ] 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "subsystem": "nbd", 00:25:03.920 "config": [] 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "subsystem": "scheduler", 00:25:03.920 "config": [ 00:25:03.920 { 00:25:03.920 "method": "framework_set_scheduler", 00:25:03.920 "params": { 00:25:03.920 "name": "static" 00:25:03.920 } 00:25:03.920 } 00:25:03.920 ] 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "subsystem": "nvmf", 00:25:03.920 "config": [ 00:25:03.920 { 00:25:03.920 "method": "nvmf_set_config", 00:25:03.920 "params": { 00:25:03.920 "discovery_filter": "match_any", 00:25:03.920 "admin_cmd_passthru": { 00:25:03.920 "identify_ctrlr": false 00:25:03.920 }, 00:25:03.920 "dhchap_digests": [ 00:25:03.920 "sha256", 00:25:03.920 "sha384", 00:25:03.920 "sha512" 00:25:03.920 ], 00:25:03.920 "dhchap_dhgroups": [ 00:25:03.920 "null", 00:25:03.920 "ffdhe2048", 00:25:03.920 "ffdhe3072", 00:25:03.920 "ffdhe4096", 00:25:03.920 "ffdhe6144", 00:25:03.920 "ffdhe8192" 00:25:03.920 ] 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "nvmf_set_max_subsystems", 00:25:03.920 "params": { 00:25:03.920 "max_subsystems": 1024 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "nvmf_set_crdt", 00:25:03.920 "params": { 00:25:03.920 "crdt1": 0, 00:25:03.920 "crdt2": 0, 00:25:03.920 "crdt3": 0 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "nvmf_create_transport", 00:25:03.920 "params": 
{ 00:25:03.920 "trtype": "TCP", 00:25:03.920 "max_queue_depth": 128, 00:25:03.920 "max_io_qpairs_per_ctrlr": 127, 00:25:03.920 "in_capsule_data_size": 4096, 00:25:03.920 "max_io_size": 131072, 00:25:03.920 "io_unit_size": 131072, 00:25:03.920 "max_aq_depth": 128, 00:25:03.920 "num_shared_buffers": 511, 00:25:03.920 "buf_cache_size": 4294967295, 00:25:03.920 "dif_insert_or_strip": false, 00:25:03.920 "zcopy": false, 00:25:03.920 "c2h_success": false, 00:25:03.920 "sock_priority": 0, 00:25:03.920 "abort_timeout_sec": 1, 00:25:03.920 "ack_timeout": 0, 00:25:03.920 "data_wr_pool_size": 0 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "nvmf_create_subsystem", 00:25:03.920 "params": { 00:25:03.920 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.920 "allow_any_host": false, 00:25:03.920 "serial_number": "00000000000000000000", 00:25:03.920 "model_number": "SPDK bdev Controller", 00:25:03.920 "max_namespaces": 32, 00:25:03.920 "min_cntlid": 1, 00:25:03.920 "max_cntlid": 65519, 00:25:03.920 "ana_reporting": false 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "nvmf_subsystem_add_host", 00:25:03.920 "params": { 00:25:03.920 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.920 "host": "nqn.2016-06.io.spdk:host1", 00:25:03.920 "psk": "key0" 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "nvmf_subsystem_add_ns", 00:25:03.920 "params": { 00:25:03.920 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.920 "namespace": { 00:25:03.920 "nsid": 1, 00:25:03.920 "bdev_name": "malloc0", 00:25:03.920 "nguid": "00512F5C8D0D4262B20338F304CE4D20", 00:25:03.920 "uuid": "00512f5c-8d0d-4262-b203-38f304ce4d20", 00:25:03.920 "no_auto_visible": false 00:25:03.920 } 00:25:03.920 } 00:25:03.920 }, 00:25:03.920 { 00:25:03.920 "method": "nvmf_subsystem_add_listener", 00:25:03.920 "params": { 00:25:03.920 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.920 "listen_address": { 00:25:03.920 "trtype": "TCP", 00:25:03.920 "adrfam": "IPv4", 00:25:03.920 "traddr": "10.0.0.2", 00:25:03.920 "trsvcid": "4420" 00:25:03.920 }, 00:25:03.920 "secure_channel": false, 00:25:03.920 "sock_impl": "ssl" 00:25:03.920 } 00:25:03.920 } 00:25:03.920 ] 00:25:03.920 } 00:25:03.920 ] 00:25:03.920 }' 00:25:03.920 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:04.182 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:04.182 "subsystems": [ 00:25:04.182 { 00:25:04.182 "subsystem": "keyring", 00:25:04.182 "config": [ 00:25:04.182 { 00:25:04.182 "method": "keyring_file_add_key", 00:25:04.182 "params": { 00:25:04.182 "name": "key0", 00:25:04.182 "path": "/tmp/tmp.OlsF664L0z" 00:25:04.182 } 00:25:04.182 } 00:25:04.182 ] 00:25:04.182 }, 00:25:04.182 { 00:25:04.182 "subsystem": "iobuf", 00:25:04.182 "config": [ 00:25:04.182 { 00:25:04.182 "method": "iobuf_set_options", 00:25:04.182 "params": { 00:25:04.182 "small_pool_count": 8192, 00:25:04.182 "large_pool_count": 1024, 00:25:04.182 "small_bufsize": 8192, 00:25:04.182 "large_bufsize": 135168, 00:25:04.182 "enable_numa": false 00:25:04.182 } 00:25:04.182 } 00:25:04.182 ] 00:25:04.182 }, 00:25:04.182 { 00:25:04.182 "subsystem": "sock", 00:25:04.182 "config": [ 00:25:04.182 { 00:25:04.182 "method": "sock_set_default_impl", 00:25:04.182 "params": { 00:25:04.182 "impl_name": "posix" 00:25:04.182 } 00:25:04.182 }, 00:25:04.182 { 00:25:04.182 "method": "sock_impl_set_options", 00:25:04.182 
"params": { 00:25:04.182 "impl_name": "ssl", 00:25:04.182 "recv_buf_size": 4096, 00:25:04.182 "send_buf_size": 4096, 00:25:04.182 "enable_recv_pipe": true, 00:25:04.182 "enable_quickack": false, 00:25:04.182 "enable_placement_id": 0, 00:25:04.182 "enable_zerocopy_send_server": true, 00:25:04.182 "enable_zerocopy_send_client": false, 00:25:04.182 "zerocopy_threshold": 0, 00:25:04.182 "tls_version": 0, 00:25:04.182 "enable_ktls": false 00:25:04.182 } 00:25:04.182 }, 00:25:04.182 { 00:25:04.182 "method": "sock_impl_set_options", 00:25:04.182 "params": { 00:25:04.182 "impl_name": "posix", 00:25:04.182 "recv_buf_size": 2097152, 00:25:04.182 "send_buf_size": 2097152, 00:25:04.182 "enable_recv_pipe": true, 00:25:04.182 "enable_quickack": false, 00:25:04.182 "enable_placement_id": 0, 00:25:04.182 "enable_zerocopy_send_server": true, 00:25:04.182 "enable_zerocopy_send_client": false, 00:25:04.182 "zerocopy_threshold": 0, 00:25:04.182 "tls_version": 0, 00:25:04.182 "enable_ktls": false 00:25:04.182 } 00:25:04.182 } 00:25:04.182 ] 00:25:04.182 }, 00:25:04.182 { 00:25:04.182 "subsystem": "vmd", 00:25:04.182 "config": [] 00:25:04.182 }, 00:25:04.182 { 00:25:04.182 "subsystem": "accel", 00:25:04.182 "config": [ 00:25:04.182 { 00:25:04.182 "method": "accel_set_options", 00:25:04.182 "params": { 00:25:04.182 "small_cache_size": 128, 00:25:04.182 "large_cache_size": 16, 00:25:04.182 "task_count": 2048, 00:25:04.182 "sequence_count": 2048, 00:25:04.182 "buf_count": 2048 00:25:04.182 } 00:25:04.182 } 00:25:04.182 ] 00:25:04.182 }, 00:25:04.182 { 00:25:04.182 "subsystem": "bdev", 00:25:04.182 "config": [ 00:25:04.182 { 00:25:04.182 "method": "bdev_set_options", 00:25:04.182 "params": { 00:25:04.182 "bdev_io_pool_size": 65535, 00:25:04.182 "bdev_io_cache_size": 256, 00:25:04.182 "bdev_auto_examine": true, 00:25:04.182 "iobuf_small_cache_size": 128, 00:25:04.182 "iobuf_large_cache_size": 16 00:25:04.182 } 00:25:04.182 }, 00:25:04.182 { 00:25:04.182 "method": "bdev_raid_set_options", 00:25:04.182 "params": { 00:25:04.182 "process_window_size_kb": 1024, 00:25:04.182 "process_max_bandwidth_mb_sec": 0 00:25:04.182 } 00:25:04.182 }, 00:25:04.182 { 00:25:04.182 "method": "bdev_iscsi_set_options", 00:25:04.182 "params": { 00:25:04.182 "timeout_sec": 30 00:25:04.182 } 00:25:04.182 }, 00:25:04.182 { 00:25:04.182 "method": "bdev_nvme_set_options", 00:25:04.182 "params": { 00:25:04.182 "action_on_timeout": "none", 00:25:04.182 "timeout_us": 0, 00:25:04.182 "timeout_admin_us": 0, 00:25:04.182 "keep_alive_timeout_ms": 10000, 00:25:04.182 "arbitration_burst": 0, 00:25:04.182 "low_priority_weight": 0, 00:25:04.182 "medium_priority_weight": 0, 00:25:04.182 "high_priority_weight": 0, 00:25:04.182 "nvme_adminq_poll_period_us": 10000, 00:25:04.182 "nvme_ioq_poll_period_us": 0, 00:25:04.182 "io_queue_requests": 512, 00:25:04.182 "delay_cmd_submit": true, 00:25:04.182 "transport_retry_count": 4, 00:25:04.182 "bdev_retry_count": 3, 00:25:04.182 "transport_ack_timeout": 0, 00:25:04.182 "ctrlr_loss_timeout_sec": 0, 00:25:04.182 "reconnect_delay_sec": 0, 00:25:04.182 "fast_io_fail_timeout_sec": 0, 00:25:04.182 "disable_auto_failback": false, 00:25:04.182 "generate_uuids": false, 00:25:04.182 "transport_tos": 0, 00:25:04.182 "nvme_error_stat": false, 00:25:04.182 "rdma_srq_size": 0, 00:25:04.182 "io_path_stat": false, 00:25:04.182 "allow_accel_sequence": false, 00:25:04.182 "rdma_max_cq_size": 0, 00:25:04.182 "rdma_cm_event_timeout_ms": 0, 00:25:04.182 "dhchap_digests": [ 00:25:04.182 "sha256", 00:25:04.182 "sha384", 00:25:04.182 
"sha512" 00:25:04.182 ], 00:25:04.182 "dhchap_dhgroups": [ 00:25:04.182 "null", 00:25:04.182 "ffdhe2048", 00:25:04.182 "ffdhe3072", 00:25:04.182 "ffdhe4096", 00:25:04.182 "ffdhe6144", 00:25:04.182 "ffdhe8192" 00:25:04.182 ] 00:25:04.182 } 00:25:04.182 }, 00:25:04.182 { 00:25:04.182 "method": "bdev_nvme_attach_controller", 00:25:04.182 "params": { 00:25:04.182 "name": "nvme0", 00:25:04.182 "trtype": "TCP", 00:25:04.182 "adrfam": "IPv4", 00:25:04.182 "traddr": "10.0.0.2", 00:25:04.182 "trsvcid": "4420", 00:25:04.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.182 "prchk_reftag": false, 00:25:04.182 "prchk_guard": false, 00:25:04.182 "ctrlr_loss_timeout_sec": 0, 00:25:04.182 "reconnect_delay_sec": 0, 00:25:04.182 "fast_io_fail_timeout_sec": 0, 00:25:04.182 "psk": "key0", 00:25:04.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:04.182 "hdgst": false, 00:25:04.182 "ddgst": false, 00:25:04.182 "multipath": "multipath" 00:25:04.182 } 00:25:04.182 }, 00:25:04.182 { 00:25:04.183 "method": "bdev_nvme_set_hotplug", 00:25:04.183 "params": { 00:25:04.183 "period_us": 100000, 00:25:04.183 "enable": false 00:25:04.183 } 00:25:04.183 }, 00:25:04.183 { 00:25:04.183 "method": "bdev_enable_histogram", 00:25:04.183 "params": { 00:25:04.183 "name": "nvme0n1", 00:25:04.183 "enable": true 00:25:04.183 } 00:25:04.183 }, 00:25:04.183 { 00:25:04.183 "method": "bdev_wait_for_examine" 00:25:04.183 } 00:25:04.183 ] 00:25:04.183 }, 00:25:04.183 { 00:25:04.183 "subsystem": "nbd", 00:25:04.183 "config": [] 00:25:04.183 } 00:25:04.183 ] 00:25:04.183 }' 00:25:04.183 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2739654 00:25:04.183 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2739654 ']' 00:25:04.183 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2739654 00:25:04.183 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:04.183 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:04.183 06:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2739654 00:25:04.183 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:04.183 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:04.183 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2739654' 00:25:04.183 killing process with pid 2739654 00:25:04.183 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2739654 00:25:04.183 Received shutdown signal, test time was about 1.000000 seconds 00:25:04.183 00:25:04.183 Latency(us) 00:25:04.183 [2024-11-20T05:36:24.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.183 [2024-11-20T05:36:24.103Z] =================================================================================================================== 00:25:04.183 [2024-11-20T05:36:24.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.183 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2739654 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2739306 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2739306 
']' 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2739306 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2739306 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2739306' 00:25:04.443 killing process with pid 2739306 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2739306 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2739306 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:04.443 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:04.444 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.444 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:04.444 "subsystems": [ 00:25:04.444 { 00:25:04.444 "subsystem": "keyring", 00:25:04.444 "config": [ 00:25:04.444 { 00:25:04.444 "method": "keyring_file_add_key", 00:25:04.444 "params": { 00:25:04.444 "name": "key0", 00:25:04.444 "path": "/tmp/tmp.OlsF664L0z" 00:25:04.444 } 00:25:04.444 } 00:25:04.444 ] 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "subsystem": "iobuf", 00:25:04.444 "config": [ 00:25:04.444 { 00:25:04.444 "method": "iobuf_set_options", 00:25:04.444 "params": { 00:25:04.444 "small_pool_count": 8192, 00:25:04.444 "large_pool_count": 1024, 00:25:04.444 "small_bufsize": 8192, 00:25:04.444 "large_bufsize": 135168, 00:25:04.444 "enable_numa": false 00:25:04.444 } 00:25:04.444 } 00:25:04.444 ] 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "subsystem": "sock", 00:25:04.444 "config": [ 00:25:04.444 { 00:25:04.444 "method": "sock_set_default_impl", 00:25:04.444 "params": { 00:25:04.444 "impl_name": "posix" 00:25:04.444 } 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "method": "sock_impl_set_options", 00:25:04.444 "params": { 00:25:04.444 "impl_name": "ssl", 00:25:04.444 "recv_buf_size": 4096, 00:25:04.444 "send_buf_size": 4096, 00:25:04.444 "enable_recv_pipe": true, 00:25:04.444 "enable_quickack": false, 00:25:04.444 "enable_placement_id": 0, 00:25:04.444 "enable_zerocopy_send_server": true, 00:25:04.444 "enable_zerocopy_send_client": false, 00:25:04.444 "zerocopy_threshold": 0, 00:25:04.444 "tls_version": 0, 00:25:04.444 "enable_ktls": false 00:25:04.444 } 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "method": "sock_impl_set_options", 00:25:04.444 "params": { 00:25:04.444 "impl_name": "posix", 00:25:04.444 "recv_buf_size": 2097152, 00:25:04.444 "send_buf_size": 2097152, 00:25:04.444 "enable_recv_pipe": true, 00:25:04.444 "enable_quickack": false, 00:25:04.444 "enable_placement_id": 0, 00:25:04.444 "enable_zerocopy_send_server": true, 00:25:04.444 "enable_zerocopy_send_client": 
false, 00:25:04.444 "zerocopy_threshold": 0, 00:25:04.444 "tls_version": 0, 00:25:04.444 "enable_ktls": false 00:25:04.444 } 00:25:04.444 } 00:25:04.444 ] 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "subsystem": "vmd", 00:25:04.444 "config": [] 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "subsystem": "accel", 00:25:04.444 "config": [ 00:25:04.444 { 00:25:04.444 "method": "accel_set_options", 00:25:04.444 "params": { 00:25:04.444 "small_cache_size": 128, 00:25:04.444 "large_cache_size": 16, 00:25:04.444 "task_count": 2048, 00:25:04.444 "sequence_count": 2048, 00:25:04.444 "buf_count": 2048 00:25:04.444 } 00:25:04.444 } 00:25:04.444 ] 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "subsystem": "bdev", 00:25:04.444 "config": [ 00:25:04.444 { 00:25:04.444 "method": "bdev_set_options", 00:25:04.444 "params": { 00:25:04.444 "bdev_io_pool_size": 65535, 00:25:04.444 "bdev_io_cache_size": 256, 00:25:04.444 "bdev_auto_examine": true, 00:25:04.444 "iobuf_small_cache_size": 128, 00:25:04.444 "iobuf_large_cache_size": 16 00:25:04.444 } 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "method": "bdev_raid_set_options", 00:25:04.444 "params": { 00:25:04.444 "process_window_size_kb": 1024, 00:25:04.444 "process_max_bandwidth_mb_sec": 0 00:25:04.444 } 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "method": "bdev_iscsi_set_options", 00:25:04.444 "params": { 00:25:04.444 "timeout_sec": 30 00:25:04.444 } 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "method": "bdev_nvme_set_options", 00:25:04.444 "params": { 00:25:04.444 "action_on_timeout": "none", 00:25:04.444 "timeout_us": 0, 00:25:04.444 "timeout_admin_us": 0, 00:25:04.444 "keep_alive_timeout_ms": 10000, 00:25:04.444 "arbitration_burst": 0, 00:25:04.444 "low_priority_weight": 0, 00:25:04.444 "medium_priority_weight": 0, 00:25:04.444 "high_priority_weight": 0, 00:25:04.444 "nvme_adminq_poll_period_us": 10000, 00:25:04.444 "nvme_ioq_poll_period_us": 0, 00:25:04.444 "io_queue_requests": 0, 00:25:04.444 "delay_cmd_submit": true, 00:25:04.444 "transport_retry_count": 4, 00:25:04.444 "bdev_retry_count": 3, 00:25:04.444 "transport_ack_timeout": 0, 00:25:04.444 "ctrlr_loss_timeout_sec": 0, 00:25:04.444 "reconnect_delay_sec": 0, 00:25:04.444 "fast_io_fail_timeout_sec": 0, 00:25:04.444 "disable_auto_failback": false, 00:25:04.444 "generate_uuids": false, 00:25:04.444 "transport_tos": 0, 00:25:04.444 "nvme_error_stat": false, 00:25:04.444 "rdma_srq_size": 0, 00:25:04.444 "io_path_stat": false, 00:25:04.444 "allow_accel_sequence": false, 00:25:04.444 "rdma_max_cq_size": 0, 00:25:04.444 "rdma_cm_event_timeout_ms": 0, 00:25:04.444 "dhchap_digests": [ 00:25:04.444 "sha256", 00:25:04.444 "sha384", 00:25:04.444 "sha512" 00:25:04.444 ], 00:25:04.444 "dhchap_dhgroups": [ 00:25:04.444 "null", 00:25:04.444 "ffdhe2048", 00:25:04.444 "ffdhe3072", 00:25:04.444 "ffdhe4096", 00:25:04.444 "ffdhe6144", 00:25:04.444 "ffdhe8192" 00:25:04.444 ] 00:25:04.444 } 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "method": "bdev_nvme_set_hotplug", 00:25:04.444 "params": { 00:25:04.444 "period_us": 100000, 00:25:04.444 "enable": false 00:25:04.444 } 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "method": "bdev_malloc_create", 00:25:04.444 "params": { 00:25:04.444 "name": "malloc0", 00:25:04.444 "num_blocks": 8192, 00:25:04.444 "block_size": 4096, 00:25:04.444 "physical_block_size": 4096, 00:25:04.444 "uuid": "00512f5c-8d0d-4262-b203-38f304ce4d20", 00:25:04.444 "optimal_io_boundary": 0, 00:25:04.444 "md_size": 0, 00:25:04.444 "dif_type": 0, 00:25:04.444 "dif_is_head_of_md": false, 00:25:04.444 "dif_pi_format": 0 
00:25:04.444 } 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "method": "bdev_wait_for_examine" 00:25:04.444 } 00:25:04.444 ] 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "subsystem": "nbd", 00:25:04.444 "config": [] 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "subsystem": "scheduler", 00:25:04.444 "config": [ 00:25:04.444 { 00:25:04.444 "method": "framework_set_scheduler", 00:25:04.444 "params": { 00:25:04.444 "name": "static" 00:25:04.444 } 00:25:04.444 } 00:25:04.444 ] 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "subsystem": "nvmf", 00:25:04.444 "config": [ 00:25:04.444 { 00:25:04.444 "method": "nvmf_set_config", 00:25:04.444 "params": { 00:25:04.444 "discovery_filter": "match_any", 00:25:04.444 "admin_cmd_passthru": { 00:25:04.444 "identify_ctrlr": false 00:25:04.444 }, 00:25:04.444 "dhchap_digests": [ 00:25:04.444 "sha256", 00:25:04.444 "sha384", 00:25:04.444 "sha512" 00:25:04.444 ], 00:25:04.444 "dhchap_dhgroups": [ 00:25:04.444 "null", 00:25:04.444 "ffdhe2048", 00:25:04.444 "ffdhe3072", 00:25:04.444 "ffdhe4096", 00:25:04.444 "ffdhe6144", 00:25:04.444 "ffdhe8192" 00:25:04.444 ] 00:25:04.444 } 00:25:04.444 }, 00:25:04.444 { 00:25:04.444 "method": "nvmf_set_max_subsystems", 00:25:04.444 "params": { 00:25:04.444 "max_subsystems": 1024 00:25:04.444 } 00:25:04.444 }, 00:25:04.445 { 00:25:04.445 "method": "nvmf_set_crdt", 00:25:04.445 "params": { 00:25:04.445 "crdt1": 0, 00:25:04.445 "crdt2": 0, 00:25:04.445 "crdt3": 0 00:25:04.445 } 00:25:04.445 }, 00:25:04.445 { 00:25:04.445 "method": "nvmf_create_transport", 00:25:04.445 "params": { 00:25:04.445 "trtype": "TCP", 00:25:04.445 "max_queue_depth": 128, 00:25:04.445 "max_io_qpairs_per_ctrlr": 127, 00:25:04.445 "in_capsule_data_size": 4096, 00:25:04.445 "max_io_size": 131072, 00:25:04.445 "io_unit_size": 131072, 00:25:04.445 "max_aq_depth": 128, 00:25:04.445 "num_shared_buffers": 511, 00:25:04.445 "buf_cache_size": 4294967295, 00:25:04.445 "dif_insert_or_strip": false, 00:25:04.445 "zcopy": false, 00:25:04.445 "c2h_success": false, 00:25:04.445 "sock_priority": 0, 00:25:04.445 "abort_timeout_sec": 1, 00:25:04.445 "ack_timeout": 0, 00:25:04.445 "data_wr_pool_size": 0 00:25:04.445 } 00:25:04.445 }, 00:25:04.445 { 00:25:04.445 "method": "nvmf_create_subsystem", 00:25:04.445 "params": { 00:25:04.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.445 "allow_any_host": false, 00:25:04.445 "serial_number": "00000000000000000000", 00:25:04.445 "model_number": "SPDK bdev Controller", 00:25:04.445 "max_namespaces": 32, 00:25:04.445 "min_cntlid": 1, 00:25:04.445 "max_cntlid": 65519, 00:25:04.445 "ana_reporting": false 00:25:04.445 } 00:25:04.445 }, 00:25:04.445 { 00:25:04.445 "method": "nvmf_subsystem_add_host", 00:25:04.445 "params": { 00:25:04.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.445 "host": "nqn.2016-06.io.spdk:host1", 00:25:04.445 "psk": "key0" 00:25:04.445 } 00:25:04.445 }, 00:25:04.445 { 00:25:04.445 "method": "nvmf_subsystem_add_ns", 00:25:04.445 "params": { 00:25:04.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.445 "namespace": { 00:25:04.445 "nsid": 1, 00:25:04.445 "bdev_name": "malloc0", 00:25:04.445 "nguid": "00512F5C8D0D4262B20338F304CE4D20", 00:25:04.445 "uuid": "00512f5c-8d0d-4262-b203-38f304ce4d20", 00:25:04.445 "no_auto_visible": false 00:25:04.445 } 00:25:04.445 } 00:25:04.445 }, 00:25:04.445 { 00:25:04.445 "method": "nvmf_subsystem_add_listener", 00:25:04.445 "params": { 00:25:04.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.445 "listen_address": { 00:25:04.445 "trtype": "TCP", 00:25:04.445 "adrfam": "IPv4", 
00:25:04.445 "traddr": "10.0.0.2", 00:25:04.445 "trsvcid": "4420" 00:25:04.445 }, 00:25:04.445 "secure_channel": false, 00:25:04.445 "sock_impl": "ssl" 00:25:04.445 } 00:25:04.445 } 00:25:04.445 ] 00:25:04.445 } 00:25:04.445 ] 00:25:04.445 }' 00:25:04.445 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2740249 00:25:04.445 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2740249 00:25:04.445 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:04.445 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2740249 ']' 00:25:04.445 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.445 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:04.445 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.445 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:04.445 06:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.706 [2024-11-20 06:36:24.376906] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:04.706 [2024-11-20 06:36:24.376961] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.706 [2024-11-20 06:36:24.467477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.706 [2024-11-20 06:36:24.496680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.706 [2024-11-20 06:36:24.496709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.706 [2024-11-20 06:36:24.496715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.706 [2024-11-20 06:36:24.496720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.706 [2024-11-20 06:36:24.496724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:04.706 [2024-11-20 06:36:24.497228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.966 [2024-11-20 06:36:24.691946] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.966 [2024-11-20 06:36:24.723980] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:04.966 [2024-11-20 06:36:24.724170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2740372 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2740372 /var/tmp/bdevperf.sock 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2740372 ']' 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:05.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
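This final pass restarts both applications from the JSON captured earlier with save_config: the target above was launched with -c /dev/fd/62 and the bdevperf command that follows uses -c /dev/fd/63. The same pattern written out with explicit process substitution (a sketch; binary paths are shortened, and $tgtcfg and $bperfcfg are the variables assigned in the trace above):

    nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &                   # the trace's -c /dev/fd/62
    nvmfpid=$!

    bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bperfcfg") &                                     # the trace's -c /dev/fd/63
    bdevperf_pid=$!

Feeding the config through a file descriptor avoids writing a temporary config file while still exercising the configuration-restore path end to end.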
00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.536 06:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:05.536 "subsystems": [ 00:25:05.536 { 00:25:05.536 "subsystem": "keyring", 00:25:05.536 "config": [ 00:25:05.536 { 00:25:05.536 "method": "keyring_file_add_key", 00:25:05.536 "params": { 00:25:05.536 "name": "key0", 00:25:05.536 "path": "/tmp/tmp.OlsF664L0z" 00:25:05.536 } 00:25:05.536 } 00:25:05.536 ] 00:25:05.536 }, 00:25:05.536 { 00:25:05.536 "subsystem": "iobuf", 00:25:05.536 "config": [ 00:25:05.536 { 00:25:05.536 "method": "iobuf_set_options", 00:25:05.536 "params": { 00:25:05.536 "small_pool_count": 8192, 00:25:05.536 "large_pool_count": 1024, 00:25:05.536 "small_bufsize": 8192, 00:25:05.536 "large_bufsize": 135168, 00:25:05.536 "enable_numa": false 00:25:05.536 } 00:25:05.536 } 00:25:05.536 ] 00:25:05.536 }, 00:25:05.536 { 00:25:05.536 "subsystem": "sock", 00:25:05.536 "config": [ 00:25:05.536 { 00:25:05.536 "method": "sock_set_default_impl", 00:25:05.536 "params": { 00:25:05.536 "impl_name": "posix" 00:25:05.536 } 00:25:05.536 }, 00:25:05.536 { 00:25:05.536 "method": "sock_impl_set_options", 00:25:05.536 "params": { 00:25:05.536 "impl_name": "ssl", 00:25:05.536 "recv_buf_size": 4096, 00:25:05.536 "send_buf_size": 4096, 00:25:05.536 "enable_recv_pipe": true, 00:25:05.536 "enable_quickack": false, 00:25:05.537 "enable_placement_id": 0, 00:25:05.537 "enable_zerocopy_send_server": true, 00:25:05.537 "enable_zerocopy_send_client": false, 00:25:05.537 "zerocopy_threshold": 0, 00:25:05.537 "tls_version": 0, 00:25:05.537 "enable_ktls": false 00:25:05.537 } 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "method": "sock_impl_set_options", 00:25:05.537 "params": { 00:25:05.537 "impl_name": "posix", 00:25:05.537 "recv_buf_size": 2097152, 00:25:05.537 "send_buf_size": 2097152, 00:25:05.537 "enable_recv_pipe": true, 00:25:05.537 "enable_quickack": false, 00:25:05.537 "enable_placement_id": 0, 00:25:05.537 "enable_zerocopy_send_server": true, 00:25:05.537 "enable_zerocopy_send_client": false, 00:25:05.537 "zerocopy_threshold": 0, 00:25:05.537 "tls_version": 0, 00:25:05.537 "enable_ktls": false 00:25:05.537 } 00:25:05.537 } 00:25:05.537 ] 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "subsystem": "vmd", 00:25:05.537 "config": [] 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "subsystem": "accel", 00:25:05.537 "config": [ 00:25:05.537 { 00:25:05.537 "method": "accel_set_options", 00:25:05.537 "params": { 00:25:05.537 "small_cache_size": 128, 00:25:05.537 "large_cache_size": 16, 00:25:05.537 "task_count": 2048, 00:25:05.537 "sequence_count": 2048, 00:25:05.537 "buf_count": 2048 00:25:05.537 } 00:25:05.537 } 00:25:05.537 ] 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "subsystem": "bdev", 00:25:05.537 "config": [ 00:25:05.537 { 00:25:05.537 "method": "bdev_set_options", 00:25:05.537 "params": { 00:25:05.537 "bdev_io_pool_size": 65535, 00:25:05.537 "bdev_io_cache_size": 256, 00:25:05.537 "bdev_auto_examine": true, 00:25:05.537 "iobuf_small_cache_size": 128, 00:25:05.537 "iobuf_large_cache_size": 16 00:25:05.537 } 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "method": 
"bdev_raid_set_options", 00:25:05.537 "params": { 00:25:05.537 "process_window_size_kb": 1024, 00:25:05.537 "process_max_bandwidth_mb_sec": 0 00:25:05.537 } 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "method": "bdev_iscsi_set_options", 00:25:05.537 "params": { 00:25:05.537 "timeout_sec": 30 00:25:05.537 } 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "method": "bdev_nvme_set_options", 00:25:05.537 "params": { 00:25:05.537 "action_on_timeout": "none", 00:25:05.537 "timeout_us": 0, 00:25:05.537 "timeout_admin_us": 0, 00:25:05.537 "keep_alive_timeout_ms": 10000, 00:25:05.537 "arbitration_burst": 0, 00:25:05.537 "low_priority_weight": 0, 00:25:05.537 "medium_priority_weight": 0, 00:25:05.537 "high_priority_weight": 0, 00:25:05.537 "nvme_adminq_poll_period_us": 10000, 00:25:05.537 "nvme_ioq_poll_period_us": 0, 00:25:05.537 "io_queue_requests": 512, 00:25:05.537 "delay_cmd_submit": true, 00:25:05.537 "transport_retry_count": 4, 00:25:05.537 "bdev_retry_count": 3, 00:25:05.537 "transport_ack_timeout": 0, 00:25:05.537 "ctrlr_loss_timeout_sec": 0, 00:25:05.537 "reconnect_delay_sec": 0, 00:25:05.537 "fast_io_fail_timeout_sec": 0, 00:25:05.537 "disable_auto_failback": false, 00:25:05.537 "generate_uuids": false, 00:25:05.537 "transport_tos": 0, 00:25:05.537 "nvme_error_stat": false, 00:25:05.537 "rdma_srq_size": 0, 00:25:05.537 "io_path_stat": false, 00:25:05.537 "allow_accel_sequence": false, 00:25:05.537 "rdma_max_cq_size": 0, 00:25:05.537 "rdma_cm_event_timeout_ms": 0, 00:25:05.537 "dhchap_digests": [ 00:25:05.537 "sha256", 00:25:05.537 "sha384", 00:25:05.537 "sha512" 00:25:05.537 ], 00:25:05.537 "dhchap_dhgroups": [ 00:25:05.537 "null", 00:25:05.537 "ffdhe2048", 00:25:05.537 "ffdhe3072", 00:25:05.537 "ffdhe4096", 00:25:05.537 "ffdhe6144", 00:25:05.537 "ffdhe8192" 00:25:05.537 ] 00:25:05.537 } 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "method": "bdev_nvme_attach_controller", 00:25:05.537 "params": { 00:25:05.537 "name": "nvme0", 00:25:05.537 "trtype": "TCP", 00:25:05.537 "adrfam": "IPv4", 00:25:05.537 "traddr": "10.0.0.2", 00:25:05.537 "trsvcid": "4420", 00:25:05.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.537 "prchk_reftag": false, 00:25:05.537 "prchk_guard": false, 00:25:05.537 "ctrlr_loss_timeout_sec": 0, 00:25:05.537 "reconnect_delay_sec": 0, 00:25:05.537 "fast_io_fail_timeout_sec": 0, 00:25:05.537 "psk": "key0", 00:25:05.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:05.537 "hdgst": false, 00:25:05.537 "ddgst": false, 00:25:05.537 "multipath": "multipath" 00:25:05.537 } 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "method": "bdev_nvme_set_hotplug", 00:25:05.537 "params": { 00:25:05.537 "period_us": 100000, 00:25:05.537 "enable": false 00:25:05.537 } 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "method": "bdev_enable_histogram", 00:25:05.537 "params": { 00:25:05.537 "name": "nvme0n1", 00:25:05.537 "enable": true 00:25:05.537 } 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "method": "bdev_wait_for_examine" 00:25:05.537 } 00:25:05.537 ] 00:25:05.537 }, 00:25:05.537 { 00:25:05.537 "subsystem": "nbd", 00:25:05.537 "config": [] 00:25:05.537 } 00:25:05.537 ] 00:25:05.537 }' 00:25:05.537 [2024-11-20 06:36:25.252561] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:25:05.537 [2024-11-20 06:36:25.252612] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740372 ] 00:25:05.537 [2024-11-20 06:36:25.336914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.537 [2024-11-20 06:36:25.366517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.798 [2024-11-20 06:36:25.502489] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:06.368 06:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:06.368 06:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:06.368 06:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:06.368 06:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:06.368 06:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.368 06:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.628 Running I/O for 1 seconds... 00:25:07.687 5545.00 IOPS, 21.66 MiB/s 00:25:07.688 Latency(us) 00:25:07.688 [2024-11-20T05:36:27.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.688 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:07.688 Verification LBA range: start 0x0 length 0x2000 00:25:07.688 nvme0n1 : 1.01 5607.16 21.90 0.00 0.00 22689.91 4724.05 26760.53 00:25:07.688 [2024-11-20T05:36:27.608Z] =================================================================================================================== 00:25:07.688 [2024-11-20T05:36:27.608Z] Total : 5607.16 21.90 0.00 0.00 22689.91 4724.05 26760.53 00:25:07.688 { 00:25:07.688 "results": [ 00:25:07.688 { 00:25:07.688 "job": "nvme0n1", 00:25:07.688 "core_mask": "0x2", 00:25:07.688 "workload": "verify", 00:25:07.688 "status": "finished", 00:25:07.688 "verify_range": { 00:25:07.688 "start": 0, 00:25:07.688 "length": 8192 00:25:07.688 }, 00:25:07.688 "queue_depth": 128, 00:25:07.688 "io_size": 4096, 00:25:07.688 "runtime": 1.011743, 00:25:07.688 "iops": 5607.1551767593155, 00:25:07.688 "mibps": 21.902949909216076, 00:25:07.688 "io_failed": 0, 00:25:07.688 "io_timeout": 0, 00:25:07.688 "avg_latency_us": 22689.90980903696, 00:25:07.688 "min_latency_us": 4724.053333333333, 00:25:07.688 "max_latency_us": 26760.533333333333 00:25:07.688 } 00:25:07.688 ], 00:25:07.688 "core_count": 1 00:25:07.688 } 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id 
= --pid ']' 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:07.688 nvmf_trace.0 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2740372 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2740372 ']' 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2740372 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2740372 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2740372' 00:25:07.688 killing process with pid 2740372 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2740372 00:25:07.688 Received shutdown signal, test time was about 1.000000 seconds 00:25:07.688 00:25:07.688 Latency(us) 00:25:07.688 [2024-11-20T05:36:27.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.688 [2024-11-20T05:36:27.608Z] =================================================================================================================== 00:25:07.688 [2024-11-20T05:36:27.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2740372 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:07.688 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:07.954 rmmod nvme_tcp 00:25:07.954 rmmod nvme_fabrics 00:25:07.954 rmmod nvme_keyring 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:07.954 06:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2740249 ']' 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2740249 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2740249 ']' 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2740249 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2740249 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2740249' 00:25:07.954 killing process with pid 2740249 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2740249 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2740249 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.954 06:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.500 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:10.500 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.2yC3tb4r6Q /tmp/tmp.CoDLXYms0w /tmp/tmp.OlsF664L0z 00:25:10.500 00:25:10.500 real 1m26.740s 00:25:10.500 user 2m16.861s 00:25:10.500 sys 0m27.008s 00:25:10.500 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:10.500 06:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.500 ************************************ 00:25:10.500 END TEST nvmf_tls 
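The per-job JSON emitted by bdevperf above is internally consistent: "iops" is total completed I/Os divided by "runtime", and "mibps" is iops multiplied by io_size and divided by 2^20. A quick cross-check of the nvme0n1 numbers, with the values copied from the JSON (the awk invocation is only an illustration, not part of the harness):

    awk 'BEGIN {
        iops = 5607.1551767593155; io_size = 4096; runtime = 1.011743
        # MiB/s = IOPS x bytes-per-I/O / 2^20; I/O count = IOPS x runtime
        printf "%.2f MiB/s over ~%.0f I/Os\n", iops * io_size / 2^20, iops * runtime
    }'
    # prints: 21.90 MiB/s over ~5673 I/Os  (matching "mibps": 21.902949909216076)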
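The teardown traced above is the stock autotest pattern: archive any SPDK trace files left in /dev/shm, kill bdevperf and the target, unload nvme-tcp/nvme-fabrics/nvme-keyring, restore the iptables rules tagged SPDK_NVMF, and delete the temporary PSK files. A simplified bash sketch of the two helpers behind the first steps, reconstructed from the xtrace output ($output_dir stands in for the harness output path, and the --id/--pid argument handling and the Linux uname check are omitted):

    process_shm() {
        local id=$1 shm_files n
        # Archive any SPDK trace files left in /dev/shm for offline analysis.
        shm_files=$(find /dev/shm -name "*.${id}" -printf '%f\n')
        [[ -z $shm_files ]] && return 1
        for n in $shm_files; do
            tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"
        done
    }

    killprocess() {
        local pid=$1
        # Only signal the process if it is still alive and not a sudo wrapper.
        kill -0 "$pid" || return 0
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # the target is a child of the test shell, so wait works
    }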
00:25:10.500 ************************************ 00:25:10.500 06:36:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:10.500 06:36:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:10.500 06:36:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:10.500 06:36:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:10.500 ************************************ 00:25:10.500 START TEST nvmf_fips 00:25:10.500 ************************************ 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:10.500 * Looking for test storage... 00:25:10.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:10.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.500 --rc genhtml_branch_coverage=1 00:25:10.500 --rc genhtml_function_coverage=1 00:25:10.500 --rc genhtml_legend=1 00:25:10.500 --rc geninfo_all_blocks=1 00:25:10.500 --rc geninfo_unexecuted_blocks=1 00:25:10.500 00:25:10.500 ' 00:25:10.500 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:10.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.501 --rc genhtml_branch_coverage=1 00:25:10.501 --rc genhtml_function_coverage=1 00:25:10.501 --rc genhtml_legend=1 00:25:10.501 --rc geninfo_all_blocks=1 00:25:10.501 --rc geninfo_unexecuted_blocks=1 00:25:10.501 00:25:10.501 ' 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:10.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.501 --rc genhtml_branch_coverage=1 00:25:10.501 --rc genhtml_function_coverage=1 00:25:10.501 --rc genhtml_legend=1 00:25:10.501 --rc geninfo_all_blocks=1 00:25:10.501 --rc geninfo_unexecuted_blocks=1 00:25:10.501 00:25:10.501 ' 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:10.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.501 --rc genhtml_branch_coverage=1 00:25:10.501 --rc genhtml_function_coverage=1 00:25:10.501 --rc genhtml_legend=1 00:25:10.501 --rc geninfo_all_blocks=1 00:25:10.501 --rc geninfo_unexecuted_blocks=1 00:25:10.501 00:25:10.501 ' 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:10.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:10.501 06:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:10.501 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:25:10.502 Error setting digest 00:25:10.502 40F2B571DD7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:10.502 40F2B571DD7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:10.502 
06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:10.502 06:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.643 06:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:18.643 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:18.643 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.643 06:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:18.643 Found net devices under 0000:31:00.0: cvl_0_0 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.643 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:18.644 Found net devices under 0000:31:00.1: cvl_0_1 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:18.644 06:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:18.644 06:36:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:18.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:25:18.644 00:25:18.644 --- 10.0.0.2 ping statistics --- 00:25:18.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.644 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:18.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:25:18.644 00:25:18.644 --- 10.0.0.1 ping statistics --- 00:25:18.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.644 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2745135 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2745135 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2745135 ']' 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:18.644 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:18.644 [2024-11-20 06:36:38.208800] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
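The preamble traced at 06:36:30 gates the whole suite on a working FIPS setup: it compares 'openssl version' (3.1.1 here) against the 3.0.0 minimum, requires fips.so under the OpenSSL modules directory, builds a dedicated OPENSSL_CONF (spdk_fips.conf), confirms that both a base and a fips provider are registered, and finally proves enforcement by watching a non-approved digest fail; that is what the 'Error setting digest' MD5 errors above are. A condensed sketch of those checks, not the exact fips.sh logic:

    # Fail fast unless the OpenSSL FIPS provider module is installed.
    modules_dir=$(openssl info -modulesdir)
    [[ -f $modules_dir/fips.so ]] || exit 1

    # Both a base and a fips provider should show up here.
    openssl list -providers | grep name

    # In enforcing FIPS mode MD5 is rejected, so success here means trouble.
    if echo test | openssl md5 /dev/stdin >/dev/null 2>&1; then
        echo "MD5 succeeded: FIPS mode is not being enforced" >&2
        exit 1
    fi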
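The nvmftestinit sequence at 06:36:37 builds the two-port loopback topology those pings just verified: one port of the e810 pair stays in the default namespace as the initiator, the other is moved into its own namespace as the target, so traffic crosses real wire rather than the kernel loopback. Condensed from the trace (interface names cvl_0_0/cvl_0_1 are the ice devices found above):

    TARGET_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # Move the target-side port into its own network namespace.
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"

    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up

    # Open the NVMe/TCP port, tagged so cleanup can strip it from iptables-save.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Verify both directions before starting the target application.
    ping -c 1 10.0.0.2
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1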
00:25:18.644 [2024-11-20 06:36:38.208874] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.644 [2024-11-20 06:36:38.311608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.644 [2024-11-20 06:36:38.362144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.644 [2024-11-20 06:36:38.362194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.644 [2024-11-20 06:36:38.362204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.644 [2024-11-20 06:36:38.362211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.644 [2024-11-20 06:36:38.362218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.644 [2024-11-20 06:36:38.363074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.216 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:19.216 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:25:19.216 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:19.216 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:19.216 06:36:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:19.216 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.216 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:19.216 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:19.216 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:19.216 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.1N3 00:25:19.216 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:19.216 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.1N3 00:25:19.216 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.1N3 00:25:19.216 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.1N3 00:25:19.216 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:19.482 [2024-11-20 06:36:39.223231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.482 [2024-11-20 06:36:39.239220] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:19.482 [2024-11-20 06:36:39.239493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.482 malloc0 00:25:19.482 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:19.482 06:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2745465 00:25:19.482 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2745465 /var/tmp/bdevperf.sock 00:25:19.482 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:19.482 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2745465 ']' 00:25:19.482 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:19.482 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:19.482 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:19.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:19.482 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:19.482 06:36:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:19.482 [2024-11-20 06:36:39.391558] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:19.482 [2024-11-20 06:36:39.391642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745465 ] 00:25:19.742 [2024-11-20 06:36:39.486312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.742 [2024-11-20 06:36:39.537209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.313 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:20.313 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:25:20.313 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.1N3 00:25:20.575 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:20.835 [2024-11-20 06:36:40.564736] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:20.835 TLSTESTn1 00:25:20.836 06:36:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:21.096 Running I/O for 10 seconds... 
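Between the key file created at 06:36:39 and the ten-second run whose per-second samples follow, the TLS plumbing reduces to three RPCs against the bdevperf socket. A sketch reconstructed from the trace (rpc.py and bdevperf.py abbreviate the full scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py paths used above):

    # Write the PSK in NVMe/TCP interchange format and lock it down.
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"

    # Register the key with bdevperf's keyring, then attach over TLS by name.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Kick off the workload configured at launch (-q 128 -o 4096 -w verify -t 10).
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests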
00:25:22.982 3396.00 IOPS, 13.27 MiB/s [2024-11-20T05:36:43.847Z] 3554.00 IOPS, 13.88 MiB/s [2024-11-20T05:36:44.789Z] 3655.33 IOPS, 14.28 MiB/s [2024-11-20T05:36:46.174Z] 4183.50 IOPS, 16.34 MiB/s [2024-11-20T05:36:47.114Z] 4561.80 IOPS, 17.82 MiB/s [2024-11-20T05:36:48.055Z] 4710.83 IOPS, 18.40 MiB/s [2024-11-20T05:36:48.996Z] 4672.86 IOPS, 18.25 MiB/s [2024-11-20T05:36:49.938Z] 4760.25 IOPS, 18.59 MiB/s [2024-11-20T05:36:50.881Z] 4901.33 IOPS, 19.15 MiB/s [2024-11-20T05:36:50.881Z] 4942.80 IOPS, 19.31 MiB/s 00:25:30.961 Latency(us) 00:25:30.961 [2024-11-20T05:36:50.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.961 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:30.961 Verification LBA range: start 0x0 length 0x2000 00:25:30.961 TLSTESTn1 : 10.01 4948.76 19.33 0.00 0.00 25828.73 5160.96 46312.11 00:25:30.961 [2024-11-20T05:36:50.881Z] =================================================================================================================== 00:25:30.961 [2024-11-20T05:36:50.881Z] Total : 4948.76 19.33 0.00 0.00 25828.73 5160.96 46312.11 00:25:30.961 { 00:25:30.961 "results": [ 00:25:30.961 { 00:25:30.961 "job": "TLSTESTn1", 00:25:30.961 "core_mask": "0x4", 00:25:30.961 "workload": "verify", 00:25:30.961 "status": "finished", 00:25:30.961 "verify_range": { 00:25:30.961 "start": 0, 00:25:30.961 "length": 8192 00:25:30.961 }, 00:25:30.961 "queue_depth": 128, 00:25:30.961 "io_size": 4096, 00:25:30.961 "runtime": 10.013417, 00:25:30.961 "iops": 4948.760248374756, 00:25:30.961 "mibps": 19.33109472021389, 00:25:30.961 "io_failed": 0, 00:25:30.961 "io_timeout": 0, 00:25:30.961 "avg_latency_us": 25828.73424869839, 00:25:30.961 "min_latency_us": 5160.96, 00:25:30.961 "max_latency_us": 46312.10666666667 00:25:30.961 } 00:25:30.961 ], 00:25:30.961 "core_count": 1 00:25:30.961 } 00:25:30.961 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:30.961 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:30.961 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:25:30.961 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:25:30.961 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:25:30.961 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:30.961 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:25:30.961 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:25:30.961 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:25:30.961 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:30.961 nvmf_trace.0 00:25:31.222 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:25:31.222 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2745465 00:25:31.222 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2745465 ']' 00:25:31.222 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 
-- # kill -0 2745465 00:25:31.222 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:25:31.222 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:31.222 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2745465 00:25:31.222 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:31.222 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:31.222 06:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2745465' 00:25:31.222 killing process with pid 2745465 00:25:31.222 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2745465 00:25:31.222 Received shutdown signal, test time was about 10.000000 seconds 00:25:31.222 00:25:31.222 Latency(us) 00:25:31.222 [2024-11-20T05:36:51.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.222 [2024-11-20T05:36:51.142Z] =================================================================================================================== 00:25:31.222 [2024-11-20T05:36:51.142Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.222 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2745465 00:25:31.222 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:31.222 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:31.222 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:31.222 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:31.222 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:31.222 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:31.222 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:31.222 rmmod nvme_tcp 00:25:31.222 rmmod nvme_fabrics 00:25:31.483 rmmod nvme_keyring 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2745135 ']' 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2745135 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2745135 ']' 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 2745135 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2745135 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2745135' 00:25:31.483 killing process with pid 2745135 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2745135 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2745135 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.483 06:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.1N3 00:25:34.029 00:25:34.029 real 0m23.416s 00:25:34.029 user 0m24.876s 00:25:34.029 sys 0m9.915s 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:34.029 ************************************ 00:25:34.029 END TEST nvmf_fips 00:25:34.029 ************************************ 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:34.029 ************************************ 00:25:34.029 START TEST nvmf_control_msg_list 00:25:34.029 ************************************ 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:34.029 * Looking for test storage... 
00:25:34.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.029 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:34.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.029 --rc genhtml_branch_coverage=1 00:25:34.029 --rc genhtml_function_coverage=1 00:25:34.029 --rc genhtml_legend=1 00:25:34.029 --rc geninfo_all_blocks=1 00:25:34.029 --rc geninfo_unexecuted_blocks=1 00:25:34.029 00:25:34.029 ' 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:34.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.030 --rc genhtml_branch_coverage=1 00:25:34.030 --rc genhtml_function_coverage=1 00:25:34.030 --rc genhtml_legend=1 00:25:34.030 --rc geninfo_all_blocks=1 00:25:34.030 --rc geninfo_unexecuted_blocks=1 00:25:34.030 00:25:34.030 ' 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:34.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.030 --rc genhtml_branch_coverage=1 00:25:34.030 --rc genhtml_function_coverage=1 00:25:34.030 --rc genhtml_legend=1 00:25:34.030 --rc geninfo_all_blocks=1 00:25:34.030 --rc geninfo_unexecuted_blocks=1 00:25:34.030 00:25:34.030 ' 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:34.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.030 --rc genhtml_branch_coverage=1 00:25:34.030 --rc genhtml_function_coverage=1 00:25:34.030 --rc genhtml_legend=1 00:25:34.030 --rc geninfo_all_blocks=1 00:25:34.030 --rc geninfo_unexecuted_blocks=1 00:25:34.030 00:25:34.030 ' 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:34.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:34.030 06:36:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:42.168 06:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:42.168 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.168 06:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:42.168 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:42.168 Found net devices under 0000:31:00.0: cvl_0_0 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:42.168 Found net devices under 0000:31:00.1: cvl_0_1 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:42.168 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.169 06:37:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:42.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:25:42.169 00:25:42.169 --- 10.0.0.2 ping statistics --- 00:25:42.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.169 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:42.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:25:42.169 00:25:42.169 --- 10.0.0.1 ping statistics --- 00:25:42.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.169 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2751910 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2751910 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 2751910 ']' 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:42.169 06:37:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:42.169 [2024-11-20 06:37:01.483027] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:42.169 [2024-11-20 06:37:01.483094] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.169 [2024-11-20 06:37:01.586622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.169 [2024-11-20 06:37:01.637982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.169 [2024-11-20 06:37:01.638035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.169 [2024-11-20 06:37:01.638044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.169 [2024-11-20 06:37:01.638051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.169 [2024-11-20 06:37:01.638058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
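(Aside, not part of the captured log. Two things worth lifting out of the trace above. First, the network namespace the target is launched into was assembled a few commands earlier; condensed, and assuming the same cvl_0_0/cvl_0_1 device names, the setup amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # the real run also tags the rule with an SPDK_NVMF comment so cleanup can grep it back out
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

Second, the app_setup_trace NOTICEs just above spell out how to inspect the tracepoints while the target runs; a minimal sketch, assuming spdk_trace was built under build/bin of this tree and the target kept the default shm id 0:

    ./build/bin/spdk_trace -s nvmf -i 0      # live snapshot, exactly as the NOTICE suggests
    cp /dev/shm/nvmf_trace.0 /tmp/           # or keep the shm file for offline analysis/debug

The harness does the latter itself at cleanup, which is where the nvmf_trace.0_shm.tar.gz archived earlier in this log came from.)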
00:25:42.169 [2024-11-20 06:37:01.638903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.430 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:42.430 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:25:42.430 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:42.430 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:42.430 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:42.691 [2024-11-20 06:37:02.358853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:42.691 Malloc0 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.691 06:37:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:42.691 [2024-11-20 06:37:02.413346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2752203 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2752204 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2752205 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2752203 00:25:42.691 06:37:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:42.691 [2024-11-20 06:37:02.503905] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:42.691 [2024-11-20 06:37:02.513927] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:42.691 [2024-11-20 06:37:02.523906] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:44.076 Initializing NVMe Controllers 00:25:44.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:44.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:44.076 Initialization complete. Launching workers. 
00:25:44.076 ======================================================== 00:25:44.076 Latency(us) 00:25:44.076 Device Information : IOPS MiB/s Average min max 00:25:44.076 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1457.00 5.69 686.20 281.86 1223.51 00:25:44.076 ======================================================== 00:25:44.076 Total : 1457.00 5.69 686.20 281.86 1223.51 00:25:44.076 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2752204 00:25:44.076 Initializing NVMe Controllers 00:25:44.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:44.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:44.076 Initialization complete. Launching workers. 00:25:44.076 ======================================================== 00:25:44.076 Latency(us) 00:25:44.076 Device Information : IOPS MiB/s Average min max 00:25:44.076 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1412.00 5.52 708.06 309.09 979.84 00:25:44.076 ======================================================== 00:25:44.076 Total : 1412.00 5.52 708.06 309.09 979.84 00:25:44.076 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2752205 00:25:44.076 Initializing NVMe Controllers 00:25:44.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:44.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:44.076 Initialization complete. Launching workers. 00:25:44.076 ======================================================== 00:25:44.076 Latency(us) 00:25:44.076 Device Information : IOPS MiB/s Average min max 00:25:44.076 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40922.40 40650.88 41346.32 00:25:44.076 ======================================================== 00:25:44.076 Total : 25.00 0.10 40922.40 40650.88 41346.32 00:25:44.076 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.076 rmmod nvme_tcp 00:25:44.076 rmmod nvme_fabrics 00:25:44.076 rmmod nvme_keyring 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 2751910 ']' 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2751910 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 2751910 ']' 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 2751910 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2751910 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2751910' 00:25:44.076 killing process with pid 2751910 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 2751910 00:25:44.076 06:37:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 2751910 00:25:44.336 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:44.337 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:44.337 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:44.337 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:44.337 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:44.337 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:44.337 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:44.337 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.337 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.337 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.337 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.337 06:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:46.882 00:25:46.882 real 0m12.682s 00:25:46.882 user 0m8.089s 00:25:46.882 sys 0m6.744s 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:46.882 ************************************ 00:25:46.882 END TEST nvmf_control_msg_list 00:25:46.882 ************************************ 
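(Aside, not part of the captured log: the test above creates the TCP transport with --in-capsule-data-size 768 and --control-msg-num 1, then runs three one-second spdk_nvme_perf workers (cores 0x2, 0x4, 0x8) against the same listener. A minimal sketch for re-running one worker by hand, flags copied from the logged command line, with the long workspace prefix shortened on the assumption that you start from the spdk source tree:

    # queue-depth-1, 4 KiB random reads for 1 s over NVMe/TCP
    ./build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The lopsided results are the point of the test: two workers average roughly 0.7 ms while the third averages about 41 ms, consistent with the transport being deliberately limited to a single control message buffer so that late connectors have to wait for it.)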
00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:46.882 ************************************ 00:25:46.882 START TEST nvmf_wait_for_buf 00:25:46.882 ************************************ 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:46.882 * Looking for test storage... 00:25:46.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:46.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.882 --rc genhtml_branch_coverage=1 00:25:46.882 --rc genhtml_function_coverage=1 00:25:46.882 --rc genhtml_legend=1 00:25:46.882 --rc geninfo_all_blocks=1 00:25:46.882 --rc geninfo_unexecuted_blocks=1 00:25:46.882 00:25:46.882 ' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:46.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.882 --rc genhtml_branch_coverage=1 00:25:46.882 --rc genhtml_function_coverage=1 00:25:46.882 --rc genhtml_legend=1 00:25:46.882 --rc geninfo_all_blocks=1 00:25:46.882 --rc geninfo_unexecuted_blocks=1 00:25:46.882 00:25:46.882 ' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:46.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.882 --rc genhtml_branch_coverage=1 00:25:46.882 --rc genhtml_function_coverage=1 00:25:46.882 --rc genhtml_legend=1 00:25:46.882 --rc geninfo_all_blocks=1 00:25:46.882 --rc geninfo_unexecuted_blocks=1 00:25:46.882 00:25:46.882 ' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:46.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.882 --rc genhtml_branch_coverage=1 00:25:46.882 --rc genhtml_function_coverage=1 00:25:46.882 --rc genhtml_legend=1 00:25:46.882 --rc geninfo_all_blocks=1 00:25:46.882 --rc geninfo_unexecuted_blocks=1 00:25:46.882 00:25:46.882 ' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.882 06:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:46.882 06:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.029 
06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:55.029 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:55.029 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:55.029 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:55.030 Found net devices under 0000:31:00.0: cvl_0_0 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:55.030 Found net devices under 0000:31:00.1: cvl_0_1 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.030 06:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:55.030 06:37:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:55.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:25:55.030 00:25:55.030 --- 10.0.0.2 ping statistics --- 00:25:55.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.030 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:25:55.030 00:25:55.030 --- 10.0.0.1 ping statistics --- 00:25:55.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.030 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2756690 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2756690 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 2756690 ']' 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:55.030 06:37:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.030 [2024-11-20 06:37:14.210854] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
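For readers reconstructing the nvmftestinit sequence traced above, the namespace topology it builds condenses to roughly the sketch below. Interface names (cvl_0_0/cvl_0_1), addresses, and the nvmf_tgt invocation are taken from this run and will differ on other hosts; this is an illustrative recap of the logged commands, not a replacement for nvmf/common.sh.

    # Move the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Launch the target inside the namespace, paused until RPC configuration.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc

Running the target under ip netns exec is what lets a single two-port host act as both initiator and target over a real link, which is why both ping checks above must succeed before nvmfappstart proceeds.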
00:25:55.030 [2024-11-20 06:37:14.210922] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.030 [2024-11-20 06:37:14.312207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.030 [2024-11-20 06:37:14.363591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.030 [2024-11-20 06:37:14.363644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.030 [2024-11-20 06:37:14.363653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.030 [2024-11-20 06:37:14.363659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.030 [2024-11-20 06:37:14.363666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.030 [2024-11-20 06:37:14.364479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.292 06:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.292 Malloc0 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.292 [2024-11-20 06:37:15.186241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.292 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.554 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.554 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:55.554 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.554 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:55.554 [2024-11-20 06:37:15.222544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.554 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.554 06:37:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:55.554 [2024-11-20 06:37:15.327859] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:56.940 Initializing NVMe Controllers 00:25:56.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:56.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:56.940 Initialization complete. Launching workers. 00:25:56.940 ======================================================== 00:25:56.940 Latency(us) 00:25:56.940 Device Information : IOPS MiB/s Average min max 00:25:56.940 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.92 15.99 32391.30 8004.57 63853.47 00:25:56.940 ======================================================== 00:25:56.940 Total : 127.92 15.99 32391.30 8004.57 63853.47 00:25:56.940 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.940 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:56.940 rmmod nvme_tcp 00:25:56.940 rmmod nvme_fabrics 00:25:56.940 rmmod nvme_keyring 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2756690 ']' 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2756690 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 2756690 ']' 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 2756690 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2756690 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2756690' 00:25:57.201 killing process with pid 2756690 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 2756690 00:25:57.201 06:37:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 2756690 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.201 06:37:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.747 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:59.747 00:25:59.747 real 0m12.901s 00:25:59.747 user 0m5.225s 00:25:59.747 sys 0m6.232s 00:25:59.747 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:59.747 06:37:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:59.747 ************************************ 00:25:59.747 END TEST nvmf_wait_for_buf 00:25:59.747 ************************************ 00:25:59.747 06:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:59.747 06:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:59.747 06:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:25:59.747 06:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:25:59.747 06:37:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:25:59.747 06:37:19 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:07.894 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:07.895 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:07.895 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:07.895 Found net devices under 0000:31:00.0: cvl_0_0 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:07.895 Found net devices under 0000:31:00.1: cvl_0_1 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:07.895 ************************************ 00:26:07.895 START TEST nvmf_perf_adq 00:26:07.895 ************************************ 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:07.895 * Looking for test storage... 00:26:07.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.895 06:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:07.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.895 --rc genhtml_branch_coverage=1 00:26:07.895 --rc genhtml_function_coverage=1 00:26:07.895 --rc genhtml_legend=1 00:26:07.895 --rc geninfo_all_blocks=1 00:26:07.895 --rc geninfo_unexecuted_blocks=1 00:26:07.895 00:26:07.895 ' 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:07.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.895 --rc genhtml_branch_coverage=1 00:26:07.895 --rc genhtml_function_coverage=1 00:26:07.895 --rc genhtml_legend=1 00:26:07.895 --rc geninfo_all_blocks=1 00:26:07.895 --rc geninfo_unexecuted_blocks=1 00:26:07.895 00:26:07.895 ' 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:07.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.895 --rc genhtml_branch_coverage=1 00:26:07.895 --rc genhtml_function_coverage=1 00:26:07.895 --rc genhtml_legend=1 00:26:07.895 --rc geninfo_all_blocks=1 00:26:07.895 --rc geninfo_unexecuted_blocks=1 00:26:07.895 00:26:07.895 ' 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:07.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.895 --rc genhtml_branch_coverage=1 00:26:07.895 --rc genhtml_function_coverage=1 00:26:07.895 --rc genhtml_legend=1 00:26:07.895 --rc geninfo_all_blocks=1 00:26:07.895 --rc geninfo_unexecuted_blocks=1 00:26:07.895 00:26:07.895 ' 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
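The `lt 1.15 2` trace replayed above (here, and earlier in this section for nvmf_wait_for_buf) is scripts/common.sh comparing the installed lcov version against 2 to pick coverage flags. A compact re-implementation of that comparison follows; the function names mirror the trace, but the bodies are an illustrative sketch, not the literal upstream source.

    #!/usr/bin/env bash
    # Echo the numeric value of one version component, 0 if non-numeric.
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }
    # cmp_versions VER1 OP VER2, e.g. cmp_versions 1.15 '<' 2
    cmp_versions() {
        local IFS=.- op=$2 v a b
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            a=$(decimal "${ver1[v]:-0}"); b=$(decimal "${ver2[v]:-0}")
            (( a > b )) && { [[ $op == '>' ]]; return; }   # decided: greater
            (( a < b )) && { [[ $op == '<' ]]; return; }   # decided: less
        done
        [[ $op == '==' ]]                                  # all components equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov older than 2"

In the run above the comparison succeeds (1 < 2), so LCOV_OPTS is exported with the lcov_branch_coverage/lcov_function_coverage spellings that pre-2.0 lcov expects, exactly as the exported values in the trace show.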
00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.895 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:07.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:07.896 06:37:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:07.896 06:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:14.482 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:14.483 06:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:14.483 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:14.483 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:14.483 Found net devices under 0000:31:00.0: cvl_0_0 00:26:14.483 06:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:14.483 Found net devices under 0000:31:00.1: cvl_0_1 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:14.483 06:37:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:15.868 06:37:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:17.964 06:37:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:23.257 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:23.257 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:23.258 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:23.258 Found net devices under 0000:31:00.0: cvl_0_0 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:23.258 Found net devices under 0000:31:00.1: cvl_0_1 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:23.258 06:37:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:23.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:26:23.258 00:26:23.258 --- 10.0.0.2 ping statistics --- 00:26:23.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.258 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:23.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:23.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:26:23.258 00:26:23.258 --- 10.0.0.1 ping statistics --- 00:26:23.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.258 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2767059 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2767059 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2767059 ']' 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:23.258 06:37:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:23.520 [2024-11-20 06:37:43.192202] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
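
The nvmftestinit phase recorded above is easier to read out of xtrace form. A minimal sketch of the topology it builds, using only the interface names and addresses that appear in this log (two E810 ports on one host, presumably cabled back-to-back):

    # Move one port into a private namespace for the target; the peer port
    # stays in the root namespace for the initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Open the NVMe/TCP port on the initiator side, then verify both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two single-packet pings (0.682 ms and 0.310 ms round trips in this run) gate the rest of the test: only after both directions succeed does the script modprobe nvme-tcp and launch nvmf_tgt inside the namespace with -m 0xF --wait-for-rpc.
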
00:26:23.520 [2024-11-20 06:37:43.192270] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.520 [2024-11-20 06:37:43.294710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:23.520 [2024-11-20 06:37:43.348956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.520 [2024-11-20 06:37:43.349014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.520 [2024-11-20 06:37:43.349023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.520 [2024-11-20 06:37:43.349030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.520 [2024-11-20 06:37:43.349036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.520 [2024-11-20 06:37:43.351125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.520 [2024-11-20 06:37:43.351389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.520 [2024-11-20 06:37:43.351553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.520 [2024-11-20 06:37:43.351555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.463 
06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.463 [2024-11-20 06:37:44.225590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.463 Malloc1 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:24.463 [2024-11-20 06:37:44.302412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2767249 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:26:24.463 06:37:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:27.013 06:37:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:26:27.013 06:37:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.013 06:37:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:27.013 06:37:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.013 06:37:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:26:27.013 "tick_rate": 2400000000, 00:26:27.013 "poll_groups": [ 00:26:27.013 { 00:26:27.013 "name": "nvmf_tgt_poll_group_000", 00:26:27.013 "admin_qpairs": 1, 00:26:27.013 "io_qpairs": 1, 00:26:27.013 "current_admin_qpairs": 1, 00:26:27.013 "current_io_qpairs": 1, 00:26:27.013 "pending_bdev_io": 0, 00:26:27.013 "completed_nvme_io": 15920, 00:26:27.013 "transports": [ 00:26:27.013 { 00:26:27.013 "trtype": "TCP" 00:26:27.013 } 00:26:27.013 ] 00:26:27.013 }, 00:26:27.013 { 00:26:27.013 "name": "nvmf_tgt_poll_group_001", 00:26:27.013 "admin_qpairs": 0, 00:26:27.013 "io_qpairs": 1, 00:26:27.013 "current_admin_qpairs": 0, 00:26:27.013 "current_io_qpairs": 1, 00:26:27.013 "pending_bdev_io": 0, 00:26:27.013 "completed_nvme_io": 15642, 00:26:27.013 "transports": [ 00:26:27.013 { 00:26:27.013 "trtype": "TCP" 00:26:27.013 } 00:26:27.013 ] 00:26:27.013 }, 00:26:27.013 { 00:26:27.013 "name": "nvmf_tgt_poll_group_002", 00:26:27.013 "admin_qpairs": 0, 00:26:27.013 "io_qpairs": 1, 00:26:27.013 "current_admin_qpairs": 0, 00:26:27.013 "current_io_qpairs": 1, 00:26:27.013 "pending_bdev_io": 0, 00:26:27.013 "completed_nvme_io": 15262, 00:26:27.013 "transports": [ 00:26:27.013 { 00:26:27.013 "trtype": "TCP" 00:26:27.013 } 00:26:27.013 ] 00:26:27.013 }, 00:26:27.013 { 00:26:27.013 "name": "nvmf_tgt_poll_group_003", 00:26:27.013 "admin_qpairs": 0, 00:26:27.013 "io_qpairs": 1, 00:26:27.013 "current_admin_qpairs": 0, 00:26:27.013 "current_io_qpairs": 1, 00:26:27.013 "pending_bdev_io": 0, 00:26:27.013 "completed_nvme_io": 15538, 00:26:27.013 "transports": [ 00:26:27.013 { 00:26:27.013 "trtype": "TCP" 00:26:27.013 } 00:26:27.013 ] 00:26:27.013 } 00:26:27.013 ] 00:26:27.013 }' 00:26:27.013 06:37:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:27.013 06:37:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:26:27.013 06:37:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:26:27.013 06:37:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:26:27.013 06:37:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2767249 00:26:35.152 Initializing NVMe Controllers 00:26:35.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:35.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:35.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:35.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:35.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:26:35.152 Initialization complete. Launching workers. 00:26:35.152 ======================================================== 00:26:35.152 Latency(us) 00:26:35.152 Device Information : IOPS MiB/s Average min max 00:26:35.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12674.60 49.51 5049.64 1693.78 12698.95 00:26:35.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12633.10 49.35 5066.16 1244.76 13580.28 00:26:35.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12713.10 49.66 5034.49 1240.08 13663.99 00:26:35.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12844.60 50.17 4982.04 1219.46 13019.48 00:26:35.152 ======================================================== 00:26:35.153 Total : 50865.39 198.69 5032.88 1219.46 13663.99 00:26:35.153 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:35.153 rmmod nvme_tcp 00:26:35.153 rmmod nvme_fabrics 00:26:35.153 rmmod nvme_keyring 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2767059 ']' 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2767059 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2767059 ']' 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2767059 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2767059 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2767059' 00:26:35.153 killing process with pid 2767059 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2767059 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2767059 00:26:35.153 06:37:54 
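
Two consistency checks on the first (sock-priority 0, placement-id 0) run that just finished, using only numbers already present in the trace. First, the nvmf_get_stats JSON listed all four poll groups with current_io_qpairs of 1, which is exactly what the jq -r '.poll_groups[] | select(.current_io_qpairs == 1)' piped to wc -l assertion counted (count=4): the four perf connections landed one per target core. Second, the summary table is self-consistent: 50865.39 IOPS x 4096 B is roughly 198.7 MiB/s, matching the reported 198.69 MiB/s, and per-connection average latency is roughly queue depth divided by per-connection IOPS, e.g. 64 / 12674.60 is about 5.05 ms against the 5049.64 us average reported for the lcore 4 worker.
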
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.153 06:37:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.070 06:37:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:37.070 06:37:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:26:37.070 06:37:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:37.070 06:37:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:38.987 06:37:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:40.903 06:38:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:46.194 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:46.195 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:46.195 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:46.195 Found net devices under 0000:31:00.0: cvl_0_0 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:46.195 Found net devices under 0000:31:00.1: cvl_0_1 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:46.195 06:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:46.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:46.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:26:46.195 00:26:46.195 --- 10.0.0.2 ping statistics --- 00:26:46.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.195 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:46.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:46.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:26:46.195 00:26:46.195 --- 10.0.0.1 ping statistics --- 00:26:46.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.195 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:26:46.195 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:46.196 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.196 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:46.196 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:46.196 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.196 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:46.196 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:46.196 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:26:46.196 06:38:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:46.196 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:46.196 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:46.196 net.core.busy_poll = 1 00:26:46.196 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:46.196 net.core.busy_read = 1 00:26:46.196 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:46.196 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:46.458 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:46.458 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:46.458 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:46.458 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:46.458 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:46.458 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:46.458 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:46.458 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2771950 00:26:46.458 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2771950 00:26:46.459 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:46.459 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2771950 ']' 00:26:46.459 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.459 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:46.459 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.459 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:46.459 06:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:46.459 [2024-11-20 06:38:06.350736] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:26:46.459 [2024-11-20 06:38:06.350812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.720 [2024-11-20 06:38:06.450925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:46.720 [2024-11-20 06:38:06.504199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
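
Before the second nvmf_tgt start, adq_configure_driver switches the target port into ADQ mode. Condensing the trace records above into one sketch (device, IP, and port values are the ones this log uses; the ethtool and tc commands actually run inside the target namespace via ip netns exec, elided here for brevity):

    ethtool --offload cvl_0_0 hw-tc-offload on         # HW traffic-class offload on the E810 port
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1                     # busy-poll sockets instead of sleeping in epoll
    sysctl -w net.core.busy_read=1
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The mqprio qdisc carves the port's queues into two traffic classes of two queues each, and the flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1 purely in hardware (skip_sw), after which the set_xps_rxqs helper maps transmit queues accordingly. The application-side halves of the same switch are the RPC options that follow below: sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport with --sock-priority 1, versus 0 for both in the baseline run.
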
00:26:46.720 [2024-11-20 06:38:06.504251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.720 [2024-11-20 06:38:06.504260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.720 [2024-11-20 06:38:06.504266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:46.720 [2024-11-20 06:38:06.504273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.720 [2024-11-20 06:38:06.506363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.720 [2024-11-20 06:38:06.506525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:46.720 [2024-11-20 06:38:06.506684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.720 [2024-11-20 06:38:06.506684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:47.293 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:47.293 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:26:47.293 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:47.293 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:47.293 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.554 06:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.554 [2024-11-20 06:38:07.380884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.554 Malloc1 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.554 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.555 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:47.555 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.555 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.555 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.555 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.555 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.555 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:47.555 [2024-11-20 06:38:07.454576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.555 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.555 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2772067 00:26:47.555 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:26:47.555 06:38:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:50.123 06:38:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:26:50.123 06:38:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.123 06:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:50.123 06:38:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.123 06:38:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:26:50.123 "tick_rate": 2400000000, 00:26:50.123 "poll_groups": [ 00:26:50.123 { 00:26:50.123 "name": "nvmf_tgt_poll_group_000", 00:26:50.123 "admin_qpairs": 1, 00:26:50.123 "io_qpairs": 3, 00:26:50.123 "current_admin_qpairs": 1, 00:26:50.123 "current_io_qpairs": 3, 00:26:50.123 "pending_bdev_io": 0, 00:26:50.123 "completed_nvme_io": 28344, 00:26:50.123 "transports": [ 00:26:50.123 { 00:26:50.123 "trtype": "TCP" 00:26:50.123 } 00:26:50.123 ] 00:26:50.123 }, 00:26:50.123 { 00:26:50.123 "name": "nvmf_tgt_poll_group_001", 00:26:50.123 "admin_qpairs": 0, 00:26:50.123 "io_qpairs": 1, 00:26:50.123 "current_admin_qpairs": 0, 00:26:50.123 "current_io_qpairs": 1, 00:26:50.123 "pending_bdev_io": 0, 00:26:50.123 "completed_nvme_io": 25319, 00:26:50.123 "transports": [ 00:26:50.123 { 00:26:50.123 "trtype": "TCP" 00:26:50.123 } 00:26:50.123 ] 00:26:50.123 }, 00:26:50.123 { 00:26:50.123 "name": "nvmf_tgt_poll_group_002", 00:26:50.123 "admin_qpairs": 0, 00:26:50.123 "io_qpairs": 0, 00:26:50.123 "current_admin_qpairs": 0, 00:26:50.123 "current_io_qpairs": 0, 00:26:50.123 "pending_bdev_io": 0, 00:26:50.123 "completed_nvme_io": 0, 00:26:50.123 "transports": [ 00:26:50.123 { 00:26:50.123 "trtype": "TCP" 00:26:50.123 } 00:26:50.123 ] 00:26:50.123 }, 00:26:50.123 { 00:26:50.123 "name": "nvmf_tgt_poll_group_003", 00:26:50.123 "admin_qpairs": 0, 00:26:50.123 "io_qpairs": 0, 00:26:50.123 "current_admin_qpairs": 0, 00:26:50.123 "current_io_qpairs": 0, 00:26:50.123 "pending_bdev_io": 0, 00:26:50.123 "completed_nvme_io": 0, 00:26:50.123 "transports": [ 00:26:50.123 { 00:26:50.123 "trtype": "TCP" 00:26:50.123 } 00:26:50.123 ] 00:26:50.123 } 00:26:50.123 ] 00:26:50.123 }' 00:26:50.123 06:38:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:50.123 06:38:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:26:50.123 06:38:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:26:50.123 06:38:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:26:50.123 06:38:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2772067 00:26:58.266 Initializing NVMe Controllers 00:26:58.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:58.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:58.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:58.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:58.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:58.266 Initialization complete. Launching workers. 
00:26:58.266 ======================================================== 00:26:58.266 Latency(us) 00:26:58.266 Device Information : IOPS MiB/s Average min max 00:26:58.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5552.46 21.69 11529.47 1427.08 57598.06 00:26:58.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9592.13 37.47 6673.01 1136.25 59708.38 00:26:58.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5257.36 20.54 12225.56 1397.61 55102.31 00:26:58.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 17493.87 68.34 3658.32 1106.99 44937.35 00:26:58.267 ======================================================== 00:26:58.267 Total : 37895.81 148.03 6763.22 1106.99 59708.38 00:26:58.267 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:58.267 rmmod nvme_tcp 00:26:58.267 rmmod nvme_fabrics 00:26:58.267 rmmod nvme_keyring 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2771950 ']' 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2771950 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2771950 ']' 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2771950 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2771950 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2771950' 00:26:58.267 killing process with pid 2771950 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2771950 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2771950 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:58.267 
06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.267 06:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.180 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:00.180 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:27:00.180 00:27:00.180 real 0m53.481s 00:27:00.180 user 2m50.013s 00:27:00.180 sys 0m11.826s 00:27:00.180 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:00.180 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:00.180 ************************************ 00:27:00.180 END TEST nvmf_perf_adq 00:27:00.180 ************************************ 00:27:00.180 06:38:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:00.180 06:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:00.180 06:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:00.180 06:38:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:00.441 ************************************ 00:27:00.441 START TEST nvmf_shutdown 00:27:00.441 ************************************ 00:27:00.441 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:00.442 * Looking for test storage... 
00:27:00.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:00.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.442 --rc genhtml_branch_coverage=1 00:27:00.442 --rc genhtml_function_coverage=1 00:27:00.442 --rc genhtml_legend=1 00:27:00.442 --rc geninfo_all_blocks=1 00:27:00.442 --rc geninfo_unexecuted_blocks=1 00:27:00.442 00:27:00.442 ' 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:00.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.442 --rc genhtml_branch_coverage=1 00:27:00.442 --rc genhtml_function_coverage=1 00:27:00.442 --rc genhtml_legend=1 00:27:00.442 --rc geninfo_all_blocks=1 00:27:00.442 --rc geninfo_unexecuted_blocks=1 00:27:00.442 00:27:00.442 ' 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:00.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.442 --rc genhtml_branch_coverage=1 00:27:00.442 --rc genhtml_function_coverage=1 00:27:00.442 --rc genhtml_legend=1 00:27:00.442 --rc geninfo_all_blocks=1 00:27:00.442 --rc geninfo_unexecuted_blocks=1 00:27:00.442 00:27:00.442 ' 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:00.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.442 --rc genhtml_branch_coverage=1 00:27:00.442 --rc genhtml_function_coverage=1 00:27:00.442 --rc genhtml_legend=1 00:27:00.442 --rc geninfo_all_blocks=1 00:27:00.442 --rc geninfo_unexecuted_blocks=1 00:27:00.442 00:27:00.442 ' 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
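[editor note] The cmp_versions trace above is the harness deciding how to drive lcov: the version string is split on '.', '-' and ':' and compared component by component, numerically. A minimal bash restatement of the logic visible in the trace (the function name version_lt and its exact structure are illustrative, not the verbatim scripts/common.sh source):

# Returns 0 (true) if $1 < $2, comparing dotted components numerically,
# mirroring the lt/cmp_versions behaviour traced above.
version_lt() {
    local IFS='.-:'
    local -a a b
    read -ra a <<<"$1"; read -ra b <<<"$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local i x y
    for ((i = 0; i < n; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}       # missing components compare as 0
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1                              # equal versions are not "less than"
}
version_lt 1.15 2 && echo 'lcov < 2: use the legacy --rc lcov_* option names'

In this run the comparison succeeds (1.15 < 2), which is why the trace then exports LCOV_OPTS with the --rc lcov_branch_coverage=1 style options.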
00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:00.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:00.442 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:00.443 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:00.443 06:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:00.443 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:00.443 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:00.443 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:00.703 ************************************ 00:27:00.703 START TEST nvmf_shutdown_tc1 00:27:00.703 ************************************ 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:00.703 06:38:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:08.853 06:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:08.853 06:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:08.853 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:08.853 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:08.853 Found net devices under 0000:31:00.0: cvl_0_0 00:27:08.853 06:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:08.853 Found net devices under 0000:31:00.1: cvl_0_1 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:08.853 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.854 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:27:08.854 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:08.854 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:08.854 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.854 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:08.854 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:08.854 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.854 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:08.854 06:38:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:08.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:08.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:27:08.854 00:27:08.854 --- 10.0.0.2 ping statistics --- 00:27:08.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.854 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:08.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:08.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:27:08.854 00:27:08.854 --- 10.0.0.1 ping statistics --- 00:27:08.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.854 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2778551 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2778551 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2778551 ']' 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
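[editor note] Each test case repeats the same two-namespace topology seen in the nvmf_tcp_init trace above: the first e810 port becomes the target interface inside a network namespace, the second stays in the root namespace as the initiator, and TCP port 4420 is opened between them. Condensed recap, with the device names and addresses as used in this run:

# Target side lives in its own netns; the initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tagged ACCEPT rule, so the teardown pass (iptables-save | grep -v SPDK_NVMF
# | iptables-restore, seen near the end of the previous test) can remove it.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1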
00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:08.854 06:38:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:08.854 [2024-11-20 06:38:28.236444] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:08.854 [2024-11-20 06:38:28.236508] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.854 [2024-11-20 06:38:28.335921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:08.854 [2024-11-20 06:38:28.387486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.854 [2024-11-20 06:38:28.387539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.854 [2024-11-20 06:38:28.387549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.854 [2024-11-20 06:38:28.387556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.854 [2024-11-20 06:38:28.387563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:08.854 [2024-11-20 06:38:28.390004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.854 [2024-11-20 06:38:28.390165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:08.854 [2024-11-20 06:38:28.390307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:08.854 [2024-11-20 06:38:28.390308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:09.426 [2024-11-20 06:38:29.119937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:09.426 06:38:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.426 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:09.426 Malloc1 
00:27:09.426 [2024-11-20 06:38:29.254440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.426 Malloc2 00:27:09.426 Malloc3 00:27:09.687 Malloc4 00:27:09.687 Malloc5 00:27:09.687 Malloc6 00:27:09.687 Malloc7 00:27:09.687 Malloc8 00:27:09.949 Malloc9 00:27:09.949 Malloc10 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2778937 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2778937 /var/tmp/bdevperf.sock 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2778937 ']' 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:09.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
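[editor note] The shutdown.sh@28-29 loop above runs a bare cat ten times; xtrace does not echo heredoc bodies or redirections, so the appended content is invisible in the log. Based on the Malloc1..Malloc10 notices and the single listener on 10.0.0.2:4420, each iteration appends roughly the following RPCs to rpcs.txt. This is a reconstruction for orientation, not the verbatim shutdown.sh source; the serial-number format in particular is illustrative.

# Reconstruction of one loop iteration (inferred from the log output above).
for i in {1..10}; do
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < rpcs.txt   # shutdown.sh@36: presumably replays the whole batch in one rpc session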
00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.949 { 00:27:09.949 "params": { 00:27:09.949 "name": "Nvme$subsystem", 00:27:09.949 "trtype": "$TEST_TRANSPORT", 00:27:09.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.949 "adrfam": "ipv4", 00:27:09.949 "trsvcid": "$NVMF_PORT", 00:27:09.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.949 "hdgst": ${hdgst:-false}, 00:27:09.949 "ddgst": ${ddgst:-false} 00:27:09.949 }, 00:27:09.949 "method": "bdev_nvme_attach_controller" 00:27:09.949 } 00:27:09.949 EOF 00:27:09.949 )") 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.949 { 00:27:09.949 "params": { 00:27:09.949 "name": "Nvme$subsystem", 00:27:09.949 "trtype": "$TEST_TRANSPORT", 00:27:09.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.949 "adrfam": "ipv4", 00:27:09.949 "trsvcid": "$NVMF_PORT", 00:27:09.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.949 "hdgst": ${hdgst:-false}, 00:27:09.949 "ddgst": ${ddgst:-false} 00:27:09.949 }, 00:27:09.949 "method": "bdev_nvme_attach_controller" 00:27:09.949 } 00:27:09.949 EOF 00:27:09.949 )") 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.949 { 00:27:09.949 "params": { 00:27:09.949 "name": "Nvme$subsystem", 00:27:09.949 "trtype": "$TEST_TRANSPORT", 00:27:09.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.949 "adrfam": "ipv4", 00:27:09.949 "trsvcid": "$NVMF_PORT", 00:27:09.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.949 "hdgst": ${hdgst:-false}, 00:27:09.949 "ddgst": ${ddgst:-false} 00:27:09.949 }, 00:27:09.949 "method": "bdev_nvme_attach_controller" 
00:27:09.949 } 00:27:09.949 EOF 00:27:09.949 )") 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.949 { 00:27:09.949 "params": { 00:27:09.949 "name": "Nvme$subsystem", 00:27:09.949 "trtype": "$TEST_TRANSPORT", 00:27:09.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.949 "adrfam": "ipv4", 00:27:09.949 "trsvcid": "$NVMF_PORT", 00:27:09.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.949 "hdgst": ${hdgst:-false}, 00:27:09.949 "ddgst": ${ddgst:-false} 00:27:09.949 }, 00:27:09.949 "method": "bdev_nvme_attach_controller" 00:27:09.949 } 00:27:09.949 EOF 00:27:09.949 )") 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.949 { 00:27:09.949 "params": { 00:27:09.949 "name": "Nvme$subsystem", 00:27:09.949 "trtype": "$TEST_TRANSPORT", 00:27:09.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.949 "adrfam": "ipv4", 00:27:09.949 "trsvcid": "$NVMF_PORT", 00:27:09.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.949 "hdgst": ${hdgst:-false}, 00:27:09.949 "ddgst": ${ddgst:-false} 00:27:09.949 }, 00:27:09.949 "method": "bdev_nvme_attach_controller" 00:27:09.949 } 00:27:09.949 EOF 00:27:09.949 )") 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.949 { 00:27:09.949 "params": { 00:27:09.949 "name": "Nvme$subsystem", 00:27:09.949 "trtype": "$TEST_TRANSPORT", 00:27:09.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.949 "adrfam": "ipv4", 00:27:09.949 "trsvcid": "$NVMF_PORT", 00:27:09.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.949 "hdgst": ${hdgst:-false}, 00:27:09.949 "ddgst": ${ddgst:-false} 00:27:09.949 }, 00:27:09.949 "method": "bdev_nvme_attach_controller" 00:27:09.949 } 00:27:09.949 EOF 00:27:09.949 )") 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:09.949 [2024-11-20 06:38:29.774730] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
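The process starting here is the bdev_svc helper launched at shutdown.sh@78 above. Its --json /dev/fd/63 argument is bash process substitution: the JSON emitted by gen_nvmf_target_json is handed to the app through an anonymous pipe and never touches disk. The unexpanded form is echoed verbatim by the shell's 'Killed' message from shutdown.sh line 74 further down:

$rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}")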
00:27:09.949 [2024-11-20 06:38:29.774814] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.949 { 00:27:09.949 "params": { 00:27:09.949 "name": "Nvme$subsystem", 00:27:09.949 "trtype": "$TEST_TRANSPORT", 00:27:09.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.949 "adrfam": "ipv4", 00:27:09.949 "trsvcid": "$NVMF_PORT", 00:27:09.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.949 "hdgst": ${hdgst:-false}, 00:27:09.949 "ddgst": ${ddgst:-false} 00:27:09.949 }, 00:27:09.949 "method": "bdev_nvme_attach_controller" 00:27:09.949 } 00:27:09.949 EOF 00:27:09.949 )") 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.949 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.949 { 00:27:09.949 "params": { 00:27:09.949 "name": "Nvme$subsystem", 00:27:09.949 "trtype": "$TEST_TRANSPORT", 00:27:09.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.949 "adrfam": "ipv4", 00:27:09.949 "trsvcid": "$NVMF_PORT", 00:27:09.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.950 "hdgst": ${hdgst:-false}, 00:27:09.950 "ddgst": ${ddgst:-false} 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 } 00:27:09.950 EOF 00:27:09.950 )") 00:27:09.950 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:09.950 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.950 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.950 { 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme$subsystem", 00:27:09.950 "trtype": "$TEST_TRANSPORT", 00:27:09.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.950 "adrfam": "ipv4", 00:27:09.950 "trsvcid": "$NVMF_PORT", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.950 "hdgst": ${hdgst:-false}, 00:27:09.950 "ddgst": ${ddgst:-false} 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 } 00:27:09.950 EOF 00:27:09.950 )") 00:27:09.950 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:09.950 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.950 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.950 { 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme$subsystem", 00:27:09.950 "trtype": "$TEST_TRANSPORT", 00:27:09.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.950 "adrfam": "ipv4", 
00:27:09.950 "trsvcid": "$NVMF_PORT", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.950 "hdgst": ${hdgst:-false}, 00:27:09.950 "ddgst": ${ddgst:-false} 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 } 00:27:09.950 EOF 00:27:09.950 )") 00:27:09.950 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:09.950 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:27:09.950 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:09.950 06:38:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme1", 00:27:09.950 "trtype": "tcp", 00:27:09.950 "traddr": "10.0.0.2", 00:27:09.950 "adrfam": "ipv4", 00:27:09.950 "trsvcid": "4420", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:09.950 "hdgst": false, 00:27:09.950 "ddgst": false 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 },{ 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme2", 00:27:09.950 "trtype": "tcp", 00:27:09.950 "traddr": "10.0.0.2", 00:27:09.950 "adrfam": "ipv4", 00:27:09.950 "trsvcid": "4420", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:09.950 "hdgst": false, 00:27:09.950 "ddgst": false 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 },{ 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme3", 00:27:09.950 "trtype": "tcp", 00:27:09.950 "traddr": "10.0.0.2", 00:27:09.950 "adrfam": "ipv4", 00:27:09.950 "trsvcid": "4420", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:09.950 "hdgst": false, 00:27:09.950 "ddgst": false 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 },{ 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme4", 00:27:09.950 "trtype": "tcp", 00:27:09.950 "traddr": "10.0.0.2", 00:27:09.950 "adrfam": "ipv4", 00:27:09.950 "trsvcid": "4420", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:09.950 "hdgst": false, 00:27:09.950 "ddgst": false 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 },{ 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme5", 00:27:09.950 "trtype": "tcp", 00:27:09.950 "traddr": "10.0.0.2", 00:27:09.950 "adrfam": "ipv4", 00:27:09.950 "trsvcid": "4420", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:09.950 "hdgst": false, 00:27:09.950 "ddgst": false 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 },{ 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme6", 00:27:09.950 "trtype": "tcp", 00:27:09.950 "traddr": "10.0.0.2", 00:27:09.950 "adrfam": "ipv4", 00:27:09.950 "trsvcid": "4420", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:09.950 "hdgst": false, 00:27:09.950 "ddgst": false 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 },{ 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme7", 00:27:09.950 "trtype": "tcp", 00:27:09.950 "traddr": "10.0.0.2", 00:27:09.950 
"adrfam": "ipv4", 00:27:09.950 "trsvcid": "4420", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:09.950 "hdgst": false, 00:27:09.950 "ddgst": false 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 },{ 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme8", 00:27:09.950 "trtype": "tcp", 00:27:09.950 "traddr": "10.0.0.2", 00:27:09.950 "adrfam": "ipv4", 00:27:09.950 "trsvcid": "4420", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:09.950 "hdgst": false, 00:27:09.950 "ddgst": false 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 },{ 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme9", 00:27:09.950 "trtype": "tcp", 00:27:09.950 "traddr": "10.0.0.2", 00:27:09.950 "adrfam": "ipv4", 00:27:09.950 "trsvcid": "4420", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:09.950 "hdgst": false, 00:27:09.950 "ddgst": false 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 },{ 00:27:09.950 "params": { 00:27:09.950 "name": "Nvme10", 00:27:09.950 "trtype": "tcp", 00:27:09.950 "traddr": "10.0.0.2", 00:27:09.950 "adrfam": "ipv4", 00:27:09.950 "trsvcid": "4420", 00:27:09.950 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:09.950 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:09.950 "hdgst": false, 00:27:09.950 "ddgst": false 00:27:09.950 }, 00:27:09.950 "method": "bdev_nvme_attach_controller" 00:27:09.950 }' 00:27:10.211 [2024-11-20 06:38:29.872508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.211 [2024-11-20 06:38:29.925753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.593 06:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:11.593 06:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:27:11.593 06:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:11.593 06:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.593 06:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:11.593 06:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.593 06:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2778937 00:27:11.593 06:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:27:11.593 06:38:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:27:12.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2778937 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2778551 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.534 { 00:27:12.534 "params": { 00:27:12.534 "name": "Nvme$subsystem", 00:27:12.534 "trtype": "$TEST_TRANSPORT", 00:27:12.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.534 "adrfam": "ipv4", 00:27:12.534 "trsvcid": "$NVMF_PORT", 00:27:12.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.534 "hdgst": ${hdgst:-false}, 00:27:12.534 "ddgst": ${ddgst:-false} 00:27:12.534 }, 00:27:12.534 "method": "bdev_nvme_attach_controller" 00:27:12.534 } 00:27:12.534 EOF 00:27:12.534 )") 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.534 { 00:27:12.534 "params": { 00:27:12.534 "name": "Nvme$subsystem", 00:27:12.534 "trtype": "$TEST_TRANSPORT", 00:27:12.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.534 "adrfam": "ipv4", 00:27:12.534 "trsvcid": "$NVMF_PORT", 00:27:12.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.534 "hdgst": ${hdgst:-false}, 00:27:12.534 "ddgst": ${ddgst:-false} 00:27:12.534 }, 00:27:12.534 "method": "bdev_nvme_attach_controller" 00:27:12.534 } 00:27:12.534 EOF 00:27:12.534 )") 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.534 { 00:27:12.534 "params": { 00:27:12.534 "name": "Nvme$subsystem", 00:27:12.534 "trtype": "$TEST_TRANSPORT", 00:27:12.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.534 "adrfam": "ipv4", 00:27:12.534 "trsvcid": "$NVMF_PORT", 00:27:12.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.534 "hdgst": ${hdgst:-false}, 00:27:12.534 "ddgst": ${ddgst:-false} 00:27:12.534 }, 00:27:12.534 "method": "bdev_nvme_attach_controller" 00:27:12.534 } 00:27:12.534 EOF 00:27:12.534 )") 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.534 { 00:27:12.534 "params": { 00:27:12.534 "name": "Nvme$subsystem", 00:27:12.534 "trtype": "$TEST_TRANSPORT", 00:27:12.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.534 "adrfam": "ipv4", 00:27:12.534 "trsvcid": "$NVMF_PORT", 00:27:12.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.534 "hdgst": ${hdgst:-false}, 00:27:12.534 "ddgst": ${ddgst:-false} 00:27:12.534 }, 00:27:12.534 "method": "bdev_nvme_attach_controller" 00:27:12.534 } 00:27:12.534 EOF 00:27:12.534 )") 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.534 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.534 { 00:27:12.534 "params": { 00:27:12.534 "name": "Nvme$subsystem", 00:27:12.534 "trtype": "$TEST_TRANSPORT", 00:27:12.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.534 "adrfam": "ipv4", 00:27:12.534 "trsvcid": "$NVMF_PORT", 00:27:12.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.534 "hdgst": ${hdgst:-false}, 00:27:12.534 "ddgst": ${ddgst:-false} 00:27:12.534 }, 00:27:12.534 "method": "bdev_nvme_attach_controller" 00:27:12.534 } 00:27:12.534 EOF 00:27:12.534 )") 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.535 { 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme$subsystem", 00:27:12.535 "trtype": "$TEST_TRANSPORT", 00:27:12.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "$NVMF_PORT", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.535 "hdgst": ${hdgst:-false}, 00:27:12.535 "ddgst": ${ddgst:-false} 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 } 00:27:12.535 EOF 00:27:12.535 )") 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.535 { 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme$subsystem", 00:27:12.535 "trtype": "$TEST_TRANSPORT", 00:27:12.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "$NVMF_PORT", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.535 "hdgst": ${hdgst:-false}, 00:27:12.535 "ddgst": ${ddgst:-false} 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 } 00:27:12.535 EOF 00:27:12.535 )") 00:27:12.535 [2024-11-20 06:38:32.248117] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
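The process starting here is bdevperf, launched at shutdown.sh@92 above. Its flags are standard bdevperf options and line up with the job headers printed with the results below: -q 64 is the per-job queue depth, -o 65536 the I/O size in bytes (64 KiB), -w verify a write-then-read-back-and-compare workload, and -t 1 the run time in seconds. Same shape as the traced command, with the target JSON again fed in via process substitution:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 1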
00:27:12.535 [2024-11-20 06:38:32.248176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779306 ] 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.535 { 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme$subsystem", 00:27:12.535 "trtype": "$TEST_TRANSPORT", 00:27:12.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "$NVMF_PORT", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.535 "hdgst": ${hdgst:-false}, 00:27:12.535 "ddgst": ${ddgst:-false} 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 } 00:27:12.535 EOF 00:27:12.535 )") 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.535 { 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme$subsystem", 00:27:12.535 "trtype": "$TEST_TRANSPORT", 00:27:12.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "$NVMF_PORT", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.535 "hdgst": ${hdgst:-false}, 00:27:12.535 "ddgst": ${ddgst:-false} 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 } 00:27:12.535 EOF 00:27:12.535 )") 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:12.535 { 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme$subsystem", 00:27:12.535 "trtype": "$TEST_TRANSPORT", 00:27:12.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "$NVMF_PORT", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.535 "hdgst": ${hdgst:-false}, 00:27:12.535 "ddgst": ${ddgst:-false} 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 } 00:27:12.535 EOF 00:27:12.535 )") 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
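The jq . traced at nvmf/common.sh@584 is the final step of gen_nvmf_target_json: the loop at @562/@582 accumulates one unexpanded heredoc fragment per subsystem in config[], IFS=, makes ${config[*]} expand comma-separated, and jq validates and pretty-prints the assembled document, whose expanded form follows below. A runnable sketch of that pattern; the enclosing "subsystems" skeleton is an illustrative assumption, since only the fragment loop and the IFS/printf/jq steps appear in this trace:

config=()
for subsystem in 1 2; do
config+=("$(cat << EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
# IFS=, joins the cached fragments into valid, comma-separated array elements
jq . << JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(IFS=","; printf '%s\n' "${config[*]}") ] } ] }
JSON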
00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:12.535 06:38:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme1", 00:27:12.535 "trtype": "tcp", 00:27:12.535 "traddr": "10.0.0.2", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "4420", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:12.535 "hdgst": false, 00:27:12.535 "ddgst": false 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 },{ 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme2", 00:27:12.535 "trtype": "tcp", 00:27:12.535 "traddr": "10.0.0.2", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "4420", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:12.535 "hdgst": false, 00:27:12.535 "ddgst": false 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 },{ 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme3", 00:27:12.535 "trtype": "tcp", 00:27:12.535 "traddr": "10.0.0.2", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "4420", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:12.535 "hdgst": false, 00:27:12.535 "ddgst": false 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 },{ 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme4", 00:27:12.535 "trtype": "tcp", 00:27:12.535 "traddr": "10.0.0.2", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "4420", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:12.535 "hdgst": false, 00:27:12.535 "ddgst": false 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 },{ 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme5", 00:27:12.535 "trtype": "tcp", 00:27:12.535 "traddr": "10.0.0.2", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "4420", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:12.535 "hdgst": false, 00:27:12.535 "ddgst": false 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 },{ 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme6", 00:27:12.535 "trtype": "tcp", 00:27:12.535 "traddr": "10.0.0.2", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "4420", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:12.535 "hdgst": false, 00:27:12.535 "ddgst": false 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 },{ 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme7", 00:27:12.535 "trtype": "tcp", 00:27:12.535 "traddr": "10.0.0.2", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "4420", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:12.535 "hdgst": false, 00:27:12.535 "ddgst": false 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 },{ 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme8", 00:27:12.535 "trtype": "tcp", 00:27:12.535 "traddr": "10.0.0.2", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "4420", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:12.535 "hdgst": false, 00:27:12.535 "ddgst": false 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 },{ 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme9", 00:27:12.535 "trtype": "tcp", 00:27:12.535 "traddr": "10.0.0.2", 00:27:12.535 "adrfam": "ipv4", 00:27:12.535 "trsvcid": "4420", 00:27:12.535 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:12.535 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:12.535 "hdgst": false, 00:27:12.535 "ddgst": false 00:27:12.535 }, 00:27:12.535 "method": "bdev_nvme_attach_controller" 00:27:12.535 },{ 00:27:12.535 "params": { 00:27:12.535 "name": "Nvme10", 00:27:12.535 "trtype": "tcp", 00:27:12.535 "traddr": "10.0.0.2", 00:27:12.535 "adrfam": "ipv4", 00:27:12.536 "trsvcid": "4420", 00:27:12.536 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:12.536 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:12.536 "hdgst": false, 00:27:12.536 "ddgst": false 00:27:12.536 }, 00:27:12.536 "method": "bdev_nvme_attach_controller" 00:27:12.536 }' 00:27:12.536 [2024-11-20 06:38:32.337833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.536 [2024-11-20 06:38:32.373509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.920 Running I/O for 1 seconds... 00:27:14.862 1860.00 IOPS, 116.25 MiB/s 00:27:14.862 Latency(us) 00:27:14.862 [2024-11-20T05:38:34.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.862 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.862 Verification LBA range: start 0x0 length 0x400 00:27:14.862 Nvme1n1 : 1.14 225.22 14.08 0.00 0.00 281387.73 15510.19 255153.49 00:27:14.862 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.862 Verification LBA range: start 0x0 length 0x400 00:27:14.862 Nvme2n1 : 1.17 219.63 13.73 0.00 0.00 282448.00 17476.27 256901.12 00:27:14.862 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.862 Verification LBA range: start 0x0 length 0x400 00:27:14.862 Nvme3n1 : 1.12 237.44 14.84 0.00 0.00 242887.75 12451.84 239424.85 00:27:14.862 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.862 Verification LBA range: start 0x0 length 0x400 00:27:14.862 Nvme4n1 : 1.18 272.22 17.01 0.00 0.00 221143.72 27415.89 248162.99 00:27:14.862 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.862 Verification LBA range: start 0x0 length 0x400 00:27:14.862 Nvme5n1 : 1.12 228.61 14.29 0.00 0.00 258270.61 13271.04 244667.73 00:27:14.862 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.862 Verification LBA range: start 0x0 length 0x400 00:27:14.862 Nvme6n1 : 1.14 224.26 14.02 0.00 0.00 258969.81 19770.03 253405.87 00:27:14.862 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.862 Verification LBA range: start 0x0 length 0x400 00:27:14.862 Nvme7n1 : 1.13 226.13 14.13 0.00 0.00 251822.51 12397.23 249910.61 00:27:14.862 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.862 Verification LBA range: start 0x0 length 0x400 00:27:14.862 Nvme8n1 : 1.18 273.85 17.12 0.00 0.00 205295.31 6307.84 249910.61 00:27:14.862 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:14.862 Verification LBA range: start 0x0 length 0x400 00:27:14.862 Nvme9n1 : 1.20 267.37 16.71 0.00 0.00 206772.65 13707.95 253405.87 00:27:14.862 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:27:14.862 Verification LBA range: start 0x0 length 0x400 00:27:14.862 Nvme10n1 : 1.18 221.24 13.83 0.00 0.00 243881.33 1624.75 272629.76 00:27:14.862 [2024-11-20T05:38:34.782Z] =================================================================================================================== 00:27:14.862 [2024-11-20T05:38:34.782Z] Total : 2395.97 149.75 0.00 0.00 242848.11 1624.75 272629.76 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:15.123 rmmod nvme_tcp 00:27:15.123 rmmod nvme_fabrics 00:27:15.123 rmmod nvme_keyring 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2778551 ']' 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2778551 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 2778551 ']' 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 2778551 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:15.123 06:38:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2778551 00:27:15.123 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:15.123 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:15.123 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2778551' 00:27:15.123 killing process with pid 2778551 00:27:15.123 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 2778551 00:27:15.123 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 2778551 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.384 06:38:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:17.929 00:27:17.929 real 0m16.937s 00:27:17.929 user 0m33.288s 00:27:17.929 sys 0m7.097s 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:17.929 ************************************ 00:27:17.929 END TEST nvmf_shutdown_tc1 00:27:17.929 ************************************ 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:17.929 ************************************ 00:27:17.929 START TEST nvmf_shutdown_tc2 00:27:17.929 ************************************ 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:27:17.929 06:38:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:17.929 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:17.930 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:17.930 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:17.930 Found net devices under 0000:31:00.0: cvl_0_0 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:17.930 06:38:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:17.930 Found net devices under 0000:31:00.1: cvl_0_1 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:17.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:27:17.930 00:27:17.930 --- 10.0.0.2 ping statistics --- 00:27:17.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.930 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:27:17.930 00:27:17.930 --- 10.0.0.1 ping statistics --- 00:27:17.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.930 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:17.930 06:38:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2780432 00:27:17.930 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2780432 00:27:17.931 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:17.931 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2780432 ']' 00:27:17.931 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.931 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:17.931 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.931 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:17.931 06:38:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.191 [2024-11-20 06:38:37.857468] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:18.191 [2024-11-20 06:38:37.857537] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.191 [2024-11-20 06:38:37.953699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.191 [2024-11-20 06:38:37.989246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.191 [2024-11-20 06:38:37.989277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.191 [2024-11-20 06:38:37.989283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.191 [2024-11-20 06:38:37.989288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.191 [2024-11-20 06:38:37.989292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
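Before the target start just traced, nvmf_tcp_init wired the two E810 ports into a loopback topology: the target-side port moves into its own network namespace while the initiator port stays in the default one. Condensed into plain commands, and assuming the ports already carry the cvl_0_0/cvl_0_1 names shown in the trace, the setup is roughly this sketch (a reading aid, not a substitute for the harness):

# Target port lives in its own namespace; initiator stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port; the harness tags the rule with an SPDK_NVMF
# comment so the iptr cleanup helper can strip it later via
# iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# Sanity-check reachability in both directions before starting nvmf_tgt.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1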
00:27:18.191 [2024-11-20 06:38:37.990635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.191 [2024-11-20 06:38:37.990799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.191 [2024-11-20 06:38:37.990955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.191 [2024-11-20 06:38:37.990957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:18.761 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:18.761 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:27:18.761 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:18.761 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:18.761 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.022 [2024-11-20 06:38:38.711013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.022 06:38:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.022 Malloc1 00:27:19.022 [2024-11-20 06:38:38.821712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.022 Malloc2 00:27:19.022 Malloc3 00:27:19.022 Malloc4 00:27:19.282 Malloc5 00:27:19.282 Malloc6 00:27:19.282 Malloc7 00:27:19.282 Malloc8 00:27:19.282 Malloc9 00:27:19.282 Malloc10 00:27:19.282 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.282 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:19.282 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:19.282 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2780802 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2780802 /var/tmp/bdevperf.sock 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2780802 ']' 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:19.542 06:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:19.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:19.542 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:19.542 { 00:27:19.542 "params": { 00:27:19.542 "name": "Nvme$subsystem", 00:27:19.542 "trtype": "$TEST_TRANSPORT", 00:27:19.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.542 "adrfam": "ipv4", 00:27:19.542 "trsvcid": "$NVMF_PORT", 00:27:19.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.543 "hdgst": ${hdgst:-false}, 00:27:19.543 "ddgst": ${ddgst:-false} 00:27:19.543 }, 00:27:19.543 "method": "bdev_nvme_attach_controller" 00:27:19.543 } 00:27:19.543 EOF 00:27:19.543 )") 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:19.543 { 00:27:19.543 "params": { 00:27:19.543 "name": "Nvme$subsystem", 00:27:19.543 "trtype": "$TEST_TRANSPORT", 00:27:19.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.543 "adrfam": "ipv4", 00:27:19.543 "trsvcid": "$NVMF_PORT", 00:27:19.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.543 "hdgst": ${hdgst:-false}, 00:27:19.543 "ddgst": ${ddgst:-false} 00:27:19.543 }, 00:27:19.543 "method": "bdev_nvme_attach_controller" 00:27:19.543 } 00:27:19.543 EOF 00:27:19.543 )") 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:19.543 { 00:27:19.543 "params": { 00:27:19.543 
"name": "Nvme$subsystem", 00:27:19.543 "trtype": "$TEST_TRANSPORT", 00:27:19.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.543 "adrfam": "ipv4", 00:27:19.543 "trsvcid": "$NVMF_PORT", 00:27:19.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.543 "hdgst": ${hdgst:-false}, 00:27:19.543 "ddgst": ${ddgst:-false} 00:27:19.543 }, 00:27:19.543 "method": "bdev_nvme_attach_controller" 00:27:19.543 } 00:27:19.543 EOF 00:27:19.543 )") 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:19.543 { 00:27:19.543 "params": { 00:27:19.543 "name": "Nvme$subsystem", 00:27:19.543 "trtype": "$TEST_TRANSPORT", 00:27:19.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.543 "adrfam": "ipv4", 00:27:19.543 "trsvcid": "$NVMF_PORT", 00:27:19.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.543 "hdgst": ${hdgst:-false}, 00:27:19.543 "ddgst": ${ddgst:-false} 00:27:19.543 }, 00:27:19.543 "method": "bdev_nvme_attach_controller" 00:27:19.543 } 00:27:19.543 EOF 00:27:19.543 )") 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:19.543 { 00:27:19.543 "params": { 00:27:19.543 "name": "Nvme$subsystem", 00:27:19.543 "trtype": "$TEST_TRANSPORT", 00:27:19.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.543 "adrfam": "ipv4", 00:27:19.543 "trsvcid": "$NVMF_PORT", 00:27:19.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.543 "hdgst": ${hdgst:-false}, 00:27:19.543 "ddgst": ${ddgst:-false} 00:27:19.543 }, 00:27:19.543 "method": "bdev_nvme_attach_controller" 00:27:19.543 } 00:27:19.543 EOF 00:27:19.543 )") 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:19.543 { 00:27:19.543 "params": { 00:27:19.543 "name": "Nvme$subsystem", 00:27:19.543 "trtype": "$TEST_TRANSPORT", 00:27:19.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.543 "adrfam": "ipv4", 00:27:19.543 "trsvcid": "$NVMF_PORT", 00:27:19.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.543 "hdgst": ${hdgst:-false}, 00:27:19.543 "ddgst": ${ddgst:-false} 00:27:19.543 }, 00:27:19.543 "method": "bdev_nvme_attach_controller" 00:27:19.543 } 00:27:19.543 EOF 00:27:19.543 )") 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:27:19.543 [2024-11-20 06:38:39.266052] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:19.543 [2024-11-20 06:38:39.266106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780802 ] 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:19.543 { 00:27:19.543 "params": { 00:27:19.543 "name": "Nvme$subsystem", 00:27:19.543 "trtype": "$TEST_TRANSPORT", 00:27:19.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.543 "adrfam": "ipv4", 00:27:19.543 "trsvcid": "$NVMF_PORT", 00:27:19.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.543 "hdgst": ${hdgst:-false}, 00:27:19.543 "ddgst": ${ddgst:-false} 00:27:19.543 }, 00:27:19.543 "method": "bdev_nvme_attach_controller" 00:27:19.543 } 00:27:19.543 EOF 00:27:19.543 )") 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:19.543 { 00:27:19.543 "params": { 00:27:19.543 "name": "Nvme$subsystem", 00:27:19.543 "trtype": "$TEST_TRANSPORT", 00:27:19.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.543 "adrfam": "ipv4", 00:27:19.543 "trsvcid": "$NVMF_PORT", 00:27:19.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.543 "hdgst": ${hdgst:-false}, 00:27:19.543 "ddgst": ${ddgst:-false} 00:27:19.543 }, 00:27:19.543 "method": "bdev_nvme_attach_controller" 00:27:19.543 } 00:27:19.543 EOF 00:27:19.543 )") 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:19.543 { 00:27:19.543 "params": { 00:27:19.543 "name": "Nvme$subsystem", 00:27:19.543 "trtype": "$TEST_TRANSPORT", 00:27:19.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.543 "adrfam": "ipv4", 00:27:19.543 "trsvcid": "$NVMF_PORT", 00:27:19.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.543 "hdgst": ${hdgst:-false}, 00:27:19.543 "ddgst": ${ddgst:-false} 00:27:19.543 }, 00:27:19.543 "method": "bdev_nvme_attach_controller" 00:27:19.543 } 00:27:19.543 EOF 00:27:19.543 )") 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:19.543 { 00:27:19.543 "params": { 00:27:19.543 "name": "Nvme$subsystem", 00:27:19.543 "trtype": "$TEST_TRANSPORT", 00:27:19.543 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.543 
"adrfam": "ipv4", 00:27:19.543 "trsvcid": "$NVMF_PORT", 00:27:19.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.543 "hdgst": ${hdgst:-false}, 00:27:19.543 "ddgst": ${ddgst:-false} 00:27:19.543 }, 00:27:19.543 "method": "bdev_nvme_attach_controller" 00:27:19.543 } 00:27:19.543 EOF 00:27:19.543 )") 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:27:19.543 06:38:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:19.543 "params": { 00:27:19.543 "name": "Nvme1", 00:27:19.543 "trtype": "tcp", 00:27:19.543 "traddr": "10.0.0.2", 00:27:19.543 "adrfam": "ipv4", 00:27:19.543 "trsvcid": "4420", 00:27:19.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:19.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:19.543 "hdgst": false, 00:27:19.543 "ddgst": false 00:27:19.543 }, 00:27:19.543 "method": "bdev_nvme_attach_controller" 00:27:19.543 },{ 00:27:19.543 "params": { 00:27:19.543 "name": "Nvme2", 00:27:19.544 "trtype": "tcp", 00:27:19.544 "traddr": "10.0.0.2", 00:27:19.544 "adrfam": "ipv4", 00:27:19.544 "trsvcid": "4420", 00:27:19.544 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:19.544 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:19.544 "hdgst": false, 00:27:19.544 "ddgst": false 00:27:19.544 }, 00:27:19.544 "method": "bdev_nvme_attach_controller" 00:27:19.544 },{ 00:27:19.544 "params": { 00:27:19.544 "name": "Nvme3", 00:27:19.544 "trtype": "tcp", 00:27:19.544 "traddr": "10.0.0.2", 00:27:19.544 "adrfam": "ipv4", 00:27:19.544 "trsvcid": "4420", 00:27:19.544 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:19.544 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:19.544 "hdgst": false, 00:27:19.544 "ddgst": false 00:27:19.544 }, 00:27:19.544 "method": "bdev_nvme_attach_controller" 00:27:19.544 },{ 00:27:19.544 "params": { 00:27:19.544 "name": "Nvme4", 00:27:19.544 "trtype": "tcp", 00:27:19.544 "traddr": "10.0.0.2", 00:27:19.544 "adrfam": "ipv4", 00:27:19.544 "trsvcid": "4420", 00:27:19.544 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:19.544 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:19.544 "hdgst": false, 00:27:19.544 "ddgst": false 00:27:19.544 }, 00:27:19.544 "method": "bdev_nvme_attach_controller" 00:27:19.544 },{ 00:27:19.544 "params": { 00:27:19.544 "name": "Nvme5", 00:27:19.544 "trtype": "tcp", 00:27:19.544 "traddr": "10.0.0.2", 00:27:19.544 "adrfam": "ipv4", 00:27:19.544 "trsvcid": "4420", 00:27:19.544 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:19.544 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:19.544 "hdgst": false, 00:27:19.544 "ddgst": false 00:27:19.544 }, 00:27:19.544 "method": "bdev_nvme_attach_controller" 00:27:19.544 },{ 00:27:19.544 "params": { 00:27:19.544 "name": "Nvme6", 00:27:19.544 "trtype": "tcp", 00:27:19.544 "traddr": "10.0.0.2", 00:27:19.544 "adrfam": "ipv4", 00:27:19.544 "trsvcid": "4420", 00:27:19.544 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:19.544 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:19.544 "hdgst": false, 00:27:19.544 "ddgst": false 00:27:19.544 }, 00:27:19.544 "method": "bdev_nvme_attach_controller" 00:27:19.544 },{ 00:27:19.544 "params": { 00:27:19.544 "name": "Nvme7", 00:27:19.544 "trtype": "tcp", 00:27:19.544 "traddr": "10.0.0.2", 
00:27:19.544 "adrfam": "ipv4", 00:27:19.544 "trsvcid": "4420", 00:27:19.544 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:19.544 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:19.544 "hdgst": false, 00:27:19.544 "ddgst": false 00:27:19.544 }, 00:27:19.544 "method": "bdev_nvme_attach_controller" 00:27:19.544 },{ 00:27:19.544 "params": { 00:27:19.544 "name": "Nvme8", 00:27:19.544 "trtype": "tcp", 00:27:19.544 "traddr": "10.0.0.2", 00:27:19.544 "adrfam": "ipv4", 00:27:19.544 "trsvcid": "4420", 00:27:19.544 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:19.544 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:19.544 "hdgst": false, 00:27:19.544 "ddgst": false 00:27:19.544 }, 00:27:19.544 "method": "bdev_nvme_attach_controller" 00:27:19.544 },{ 00:27:19.544 "params": { 00:27:19.544 "name": "Nvme9", 00:27:19.544 "trtype": "tcp", 00:27:19.544 "traddr": "10.0.0.2", 00:27:19.544 "adrfam": "ipv4", 00:27:19.544 "trsvcid": "4420", 00:27:19.544 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:19.544 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:19.544 "hdgst": false, 00:27:19.544 "ddgst": false 00:27:19.544 }, 00:27:19.544 "method": "bdev_nvme_attach_controller" 00:27:19.544 },{ 00:27:19.544 "params": { 00:27:19.544 "name": "Nvme10", 00:27:19.544 "trtype": "tcp", 00:27:19.544 "traddr": "10.0.0.2", 00:27:19.544 "adrfam": "ipv4", 00:27:19.544 "trsvcid": "4420", 00:27:19.544 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:19.544 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:19.544 "hdgst": false, 00:27:19.544 "ddgst": false 00:27:19.544 }, 00:27:19.544 "method": "bdev_nvme_attach_controller" 00:27:19.544 }' 00:27:19.544 [2024-11-20 06:38:39.355078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.544 [2024-11-20 06:38:39.391572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.929 Running I/O for 10 seconds... 
00:27:20.929 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:20.929 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:27:20.929 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:20.929 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.929 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:21.190 06:38:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:21.451 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:21.451 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:21.451 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:21.451 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:21.451 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.451 06:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.451 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.451 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:27:21.451 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:27:21.451 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2780802 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2780802 ']' 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2780802 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:21.713 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2780802 00:27:21.975 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:21.975 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:21.975 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2780802' 00:27:21.975 killing process with pid 2780802 00:27:21.975 06:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2780802 00:27:21.975 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2780802
00:27:21.975 Received shutdown signal, test time was about 0.966825 seconds
00:27:21.975
00:27:21.975 Latency(us)
00:27:21.975 [2024-11-20T05:38:41.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:21.975 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.975 Verification LBA range: start 0x0 length 0x400
00:27:21.975 Nvme1n1 : 0.96 266.75 16.67 0.00 0.00 237005.01 18022.40 217579.52
00:27:21.975 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.975 Verification LBA range: start 0x0 length 0x400
00:27:21.975 Nvme2n1 : 0.97 265.12 16.57 0.00 0.00 233576.96 32549.55 253405.87
00:27:21.975 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.975 Verification LBA range: start 0x0 length 0x400
00:27:21.975 Nvme3n1 : 0.95 274.54 17.16 0.00 0.00 220667.90 8628.91 246415.36
00:27:21.975 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.975 Verification LBA range: start 0x0 length 0x400
00:27:21.975 Nvme4n1 : 0.96 265.73 16.61 0.00 0.00 222639.89 10649.60 244667.73
00:27:21.975 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.975 Verification LBA range: start 0x0 length 0x400
00:27:21.975 Nvme5n1 : 0.95 272.45 17.03 0.00 0.00 212311.80 4532.91 248162.99
00:27:21.976 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.976 Verification LBA range: start 0x0 length 0x400
00:27:21.976 Nvme6n1 : 0.94 203.69 12.73 0.00 0.00 278236.44 14964.05 251658.24
00:27:21.976 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.976 Verification LBA range: start 0x0 length 0x400
00:27:21.976 Nvme7n1 : 0.96 270.44 16.90 0.00 0.00 204546.42 3181.23 246415.36
00:27:21.976 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.976 Verification LBA range: start 0x0 length 0x400
00:27:21.976 Nvme8n1 : 0.94 205.14 12.82 0.00 0.00 261746.63 19114.67 239424.85
00:27:21.976 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.976 Verification LBA range: start 0x0 length 0x400
00:27:21.976 Nvme9n1 : 0.93 205.40 12.84 0.00 0.00 256348.73 20643.84 241172.48
00:27:21.976 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:21.976 Verification LBA range: start 0x0 length 0x400
00:27:21.976 Nvme10n1 : 0.95 201.62 12.60 0.00 0.00 255584.14 20097.71 269134.51
00:27:21.976 [2024-11-20T05:38:41.896Z] ===================================================================================================================
00:27:21.976 [2024-11-20T05:38:41.896Z] Total : 2430.87 151.93 0.00 0.00 235401.32 3181.23 269134.51
00:27:21.976 06:38:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:27:22.980 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2780432 00:27:22.980 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:27:22.980 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:22.980 06:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:22.981 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:22.981 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:22.981 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:22.981 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:27:22.981 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:22.981 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:27:22.981 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:22.981 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:22.981 rmmod nvme_tcp 00:27:23.282 rmmod nvme_fabrics 00:27:23.282 rmmod nvme_keyring 00:27:23.282 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:23.282 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:27:23.282 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:27:23.282 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2780432 ']' 00:27:23.282 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2780432 00:27:23.282 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2780432 ']' 00:27:23.282 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2780432 00:27:23.282 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:27:23.282 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:23.282 06:38:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2780432 00:27:23.282 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:23.282 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:23.282 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2780432' 00:27:23.282 killing process with pid 2780432 00:27:23.282 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2780432 00:27:23.282 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2780432 00:27:23.544 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.544 06:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.544 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.544 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:27:23.544 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:27:23.544 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.544 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.544 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.544 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.544 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.544 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.544 06:38:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.456 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:25.456 00:27:25.456 real 0m7.895s 00:27:25.456 user 0m23.707s 00:27:25.456 sys 0m1.356s 00:27:25.456 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:25.456 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.456 ************************************ 00:27:25.456 END TEST nvmf_shutdown_tc2 00:27:25.456 ************************************ 00:27:25.456 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:25.456 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:25.456 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:25.456 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:25.718 ************************************ 00:27:25.718 START TEST nvmf_shutdown_tc3 00:27:25.719 ************************************ 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:25.719 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:25.719 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.719 06:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:25.719 Found net devices under 0000:31:00.0: cvl_0_0 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:25.719 Found net devices under 0000:31:00.1: cvl_0_1 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.719 06:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.719 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.720 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:27:25.981 00:27:25.981 --- 10.0.0.2 ping statistics --- 00:27:25.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.981 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:25.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:27:25.981 00:27:25.981 --- 10.0.0.1 ping statistics --- 00:27:25.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.981 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2782267 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2782267 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:25.981 06:38:45
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2782267 ']' 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:25.981 06:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:25.981 [2024-11-20 06:38:45.856204] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:25.981 [2024-11-20 06:38:45.856263] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.243 [2024-11-20 06:38:45.954253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:26.243 [2024-11-20 06:38:45.984009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.243 [2024-11-20 06:38:45.984034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.243 [2024-11-20 06:38:45.984040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.243 [2024-11-20 06:38:45.984045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.243 [2024-11-20 06:38:45.984050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
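The nvmf_tcp_init sequence above (nvmf/common.sh@250-@291) builds the whole test fixture: the target-side port is moved into its own network namespace, both ends are addressed on 10.0.0.0/24, TCP port 4420 is opened in the INPUT chain, and a ping in each direction proves the path before nvmf_tgt is launched inside that namespace. A minimal standalone sketch of the same topology, substituting a hypothetical veth pair (veth_ini/veth_tgt) for the physical cvl_0_0/cvl_0_1 E810 ports seen in the trace:

set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                  # target-side namespace (@271)
ip link add veth_ini type veth peer name veth_tgt   # stand-in for the cvl_0_* pair
ip link set veth_tgt netns "$NS"                    # move the target port (@274)
ip addr add 10.0.0.1/24 dev veth_ini                # initiator IP, root namespace (@277)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt   # target IP (@278)
ip link set veth_ini up                             # (@281)
ip netns exec "$NS" ip link set veth_tgt up         # (@283)
ip netns exec "$NS" ip link set lo up               # (@284)
iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port (@287)
ping -c 1 10.0.0.2                                  # initiator -> target (@290)
ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> initiator (@291)

Prepending "${NVMF_TARGET_NS_CMD[@]}" to NVMF_APP at @293 is what wraps every later target launch in ip netns exec cvl_0_0_ns_spdk, so the nvmf_tgt listener on 10.0.0.2:4420 is reachable over the link pair yet isolated from the host network stack.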
00:27:26.243 [2024-11-20 06:38:45.985291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.243 [2024-11-20 06:38:45.985427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.243 [2024-11-20 06:38:45.985557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:26.243 [2024-11-20 06:38:45.985558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:26.815 [2024-11-20 06:38:46.703364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:26.815 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.076 06:38:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:27.076 Malloc1 00:27:27.076 [2024-11-20 06:38:46.816646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.076 Malloc2 00:27:27.076 Malloc3 00:27:27.076 Malloc4 00:27:27.076 Malloc5 00:27:27.076 Malloc6 00:27:27.337 Malloc7 00:27:27.337 Malloc8 00:27:27.337 Malloc9 00:27:27.337 Malloc10 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2782581 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2782581 /var/tmp/bdevperf.sock 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2782581 ']' 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:27.337 06:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:27.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.337 { 00:27:27.337 "params": { 00:27:27.337 "name": "Nvme$subsystem", 00:27:27.337 "trtype": "$TEST_TRANSPORT", 00:27:27.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.337 "adrfam": "ipv4", 00:27:27.337 "trsvcid": "$NVMF_PORT", 00:27:27.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.337 "hdgst": ${hdgst:-false}, 00:27:27.337 "ddgst": ${ddgst:-false} 00:27:27.337 }, 00:27:27.337 "method": "bdev_nvme_attach_controller" 00:27:27.337 } 00:27:27.337 EOF 00:27:27.337 )") 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.337 { 00:27:27.337 "params": { 00:27:27.337 "name": "Nvme$subsystem", 00:27:27.337 "trtype": "$TEST_TRANSPORT", 00:27:27.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.337 "adrfam": "ipv4", 00:27:27.337 "trsvcid": "$NVMF_PORT", 00:27:27.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.337 "hdgst": ${hdgst:-false}, 00:27:27.337 "ddgst": ${ddgst:-false} 00:27:27.337 }, 00:27:27.337 "method": "bdev_nvme_attach_controller" 00:27:27.337 } 00:27:27.337 EOF 00:27:27.337 )") 00:27:27.337 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:27.338 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.338 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.338 { 00:27:27.338 "params": { 00:27:27.338 
"name": "Nvme$subsystem", 00:27:27.338 "trtype": "$TEST_TRANSPORT", 00:27:27.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.338 "adrfam": "ipv4", 00:27:27.338 "trsvcid": "$NVMF_PORT", 00:27:27.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.338 "hdgst": ${hdgst:-false}, 00:27:27.338 "ddgst": ${ddgst:-false} 00:27:27.338 }, 00:27:27.338 "method": "bdev_nvme_attach_controller" 00:27:27.338 } 00:27:27.338 EOF 00:27:27.338 )") 00:27:27.338 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:27.338 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.338 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.338 { 00:27:27.338 "params": { 00:27:27.338 "name": "Nvme$subsystem", 00:27:27.338 "trtype": "$TEST_TRANSPORT", 00:27:27.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.338 "adrfam": "ipv4", 00:27:27.338 "trsvcid": "$NVMF_PORT", 00:27:27.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.338 "hdgst": ${hdgst:-false}, 00:27:27.338 "ddgst": ${ddgst:-false} 00:27:27.338 }, 00:27:27.338 "method": "bdev_nvme_attach_controller" 00:27:27.338 } 00:27:27.338 EOF 00:27:27.338 )") 00:27:27.338 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:27.338 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.338 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.338 { 00:27:27.338 "params": { 00:27:27.338 "name": "Nvme$subsystem", 00:27:27.338 "trtype": "$TEST_TRANSPORT", 00:27:27.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.338 "adrfam": "ipv4", 00:27:27.338 "trsvcid": "$NVMF_PORT", 00:27:27.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.338 "hdgst": ${hdgst:-false}, 00:27:27.338 "ddgst": ${ddgst:-false} 00:27:27.338 }, 00:27:27.338 "method": "bdev_nvme_attach_controller" 00:27:27.338 } 00:27:27.338 EOF 00:27:27.338 )") 00:27:27.338 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.600 { 00:27:27.600 "params": { 00:27:27.600 "name": "Nvme$subsystem", 00:27:27.600 "trtype": "$TEST_TRANSPORT", 00:27:27.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.600 "adrfam": "ipv4", 00:27:27.600 "trsvcid": "$NVMF_PORT", 00:27:27.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.600 "hdgst": ${hdgst:-false}, 00:27:27.600 "ddgst": ${ddgst:-false} 00:27:27.600 }, 00:27:27.600 "method": "bdev_nvme_attach_controller" 00:27:27.600 } 00:27:27.600 EOF 00:27:27.600 )") 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:27.600 [2024-11-20 06:38:47.261052] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:27:27.600 [2024-11-20 06:38:47.261105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782581 ] 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.600 { 00:27:27.600 "params": { 00:27:27.600 "name": "Nvme$subsystem", 00:27:27.600 "trtype": "$TEST_TRANSPORT", 00:27:27.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.600 "adrfam": "ipv4", 00:27:27.600 "trsvcid": "$NVMF_PORT", 00:27:27.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.600 "hdgst": ${hdgst:-false}, 00:27:27.600 "ddgst": ${ddgst:-false} 00:27:27.600 }, 00:27:27.600 "method": "bdev_nvme_attach_controller" 00:27:27.600 } 00:27:27.600 EOF 00:27:27.600 )") 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.600 { 00:27:27.600 "params": { 00:27:27.600 "name": "Nvme$subsystem", 00:27:27.600 "trtype": "$TEST_TRANSPORT", 00:27:27.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.600 "adrfam": "ipv4", 00:27:27.600 "trsvcid": "$NVMF_PORT", 00:27:27.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.600 "hdgst": ${hdgst:-false}, 00:27:27.600 "ddgst": ${ddgst:-false} 00:27:27.600 }, 00:27:27.600 "method": "bdev_nvme_attach_controller" 00:27:27.600 } 00:27:27.600 EOF 00:27:27.600 )") 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.600 { 00:27:27.600 "params": { 00:27:27.600 "name": "Nvme$subsystem", 00:27:27.600 "trtype": "$TEST_TRANSPORT", 00:27:27.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.600 "adrfam": "ipv4", 00:27:27.600 "trsvcid": "$NVMF_PORT", 00:27:27.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.600 "hdgst": ${hdgst:-false}, 00:27:27.600 "ddgst": ${ddgst:-false} 00:27:27.600 }, 00:27:27.600 "method": "bdev_nvme_attach_controller" 00:27:27.600 } 00:27:27.600 EOF 00:27:27.600 )") 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.600 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.600 { 00:27:27.600 "params": { 00:27:27.600 "name": "Nvme$subsystem", 00:27:27.600 "trtype": "$TEST_TRANSPORT", 00:27:27.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.601 
"adrfam": "ipv4", 00:27:27.601 "trsvcid": "$NVMF_PORT", 00:27:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.601 "hdgst": ${hdgst:-false}, 00:27:27.601 "ddgst": ${ddgst:-false} 00:27:27.601 }, 00:27:27.601 "method": "bdev_nvme_attach_controller" 00:27:27.601 } 00:27:27.601 EOF 00:27:27.601 )") 00:27:27.601 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:27.601 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:27:27.601 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:27:27.601 06:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:27.601 "params": { 00:27:27.601 "name": "Nvme1", 00:27:27.601 "trtype": "tcp", 00:27:27.601 "traddr": "10.0.0.2", 00:27:27.601 "adrfam": "ipv4", 00:27:27.601 "trsvcid": "4420", 00:27:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:27.601 "hdgst": false, 00:27:27.601 "ddgst": false 00:27:27.601 }, 00:27:27.601 "method": "bdev_nvme_attach_controller" 00:27:27.601 },{ 00:27:27.601 "params": { 00:27:27.601 "name": "Nvme2", 00:27:27.601 "trtype": "tcp", 00:27:27.601 "traddr": "10.0.0.2", 00:27:27.601 "adrfam": "ipv4", 00:27:27.601 "trsvcid": "4420", 00:27:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:27.601 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:27.601 "hdgst": false, 00:27:27.601 "ddgst": false 00:27:27.601 }, 00:27:27.601 "method": "bdev_nvme_attach_controller" 00:27:27.601 },{ 00:27:27.601 "params": { 00:27:27.601 "name": "Nvme3", 00:27:27.601 "trtype": "tcp", 00:27:27.601 "traddr": "10.0.0.2", 00:27:27.601 "adrfam": "ipv4", 00:27:27.601 "trsvcid": "4420", 00:27:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:27.601 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:27.601 "hdgst": false, 00:27:27.601 "ddgst": false 00:27:27.601 }, 00:27:27.601 "method": "bdev_nvme_attach_controller" 00:27:27.601 },{ 00:27:27.601 "params": { 00:27:27.601 "name": "Nvme4", 00:27:27.601 "trtype": "tcp", 00:27:27.601 "traddr": "10.0.0.2", 00:27:27.601 "adrfam": "ipv4", 00:27:27.601 "trsvcid": "4420", 00:27:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:27.601 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:27.601 "hdgst": false, 00:27:27.601 "ddgst": false 00:27:27.601 }, 00:27:27.601 "method": "bdev_nvme_attach_controller" 00:27:27.601 },{ 00:27:27.601 "params": { 00:27:27.601 "name": "Nvme5", 00:27:27.601 "trtype": "tcp", 00:27:27.601 "traddr": "10.0.0.2", 00:27:27.601 "adrfam": "ipv4", 00:27:27.601 "trsvcid": "4420", 00:27:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:27.601 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:27.601 "hdgst": false, 00:27:27.601 "ddgst": false 00:27:27.601 }, 00:27:27.601 "method": "bdev_nvme_attach_controller" 00:27:27.601 },{ 00:27:27.601 "params": { 00:27:27.601 "name": "Nvme6", 00:27:27.601 "trtype": "tcp", 00:27:27.601 "traddr": "10.0.0.2", 00:27:27.601 "adrfam": "ipv4", 00:27:27.601 "trsvcid": "4420", 00:27:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:27.601 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:27.601 "hdgst": false, 00:27:27.601 "ddgst": false 00:27:27.601 }, 00:27:27.601 "method": "bdev_nvme_attach_controller" 00:27:27.601 },{ 00:27:27.601 "params": { 00:27:27.601 "name": "Nvme7", 00:27:27.601 "trtype": "tcp", 00:27:27.601 "traddr": "10.0.0.2", 
00:27:27.601 "adrfam": "ipv4", 00:27:27.601 "trsvcid": "4420", 00:27:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:27.601 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:27.601 "hdgst": false, 00:27:27.601 "ddgst": false 00:27:27.601 }, 00:27:27.601 "method": "bdev_nvme_attach_controller" 00:27:27.601 },{ 00:27:27.601 "params": { 00:27:27.601 "name": "Nvme8", 00:27:27.601 "trtype": "tcp", 00:27:27.601 "traddr": "10.0.0.2", 00:27:27.601 "adrfam": "ipv4", 00:27:27.601 "trsvcid": "4420", 00:27:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:27.601 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:27.601 "hdgst": false, 00:27:27.601 "ddgst": false 00:27:27.601 }, 00:27:27.601 "method": "bdev_nvme_attach_controller" 00:27:27.601 },{ 00:27:27.601 "params": { 00:27:27.601 "name": "Nvme9", 00:27:27.601 "trtype": "tcp", 00:27:27.601 "traddr": "10.0.0.2", 00:27:27.601 "adrfam": "ipv4", 00:27:27.601 "trsvcid": "4420", 00:27:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:27.601 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:27.601 "hdgst": false, 00:27:27.601 "ddgst": false 00:27:27.601 }, 00:27:27.601 "method": "bdev_nvme_attach_controller" 00:27:27.601 },{ 00:27:27.601 "params": { 00:27:27.601 "name": "Nvme10", 00:27:27.601 "trtype": "tcp", 00:27:27.601 "traddr": "10.0.0.2", 00:27:27.601 "adrfam": "ipv4", 00:27:27.601 "trsvcid": "4420", 00:27:27.601 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:27.601 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:27.601 "hdgst": false, 00:27:27.601 "ddgst": false 00:27:27.601 }, 00:27:27.601 "method": "bdev_nvme_attach_controller" 00:27:27.601 }' 00:27:27.601 [2024-11-20 06:38:47.350545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.601 [2024-11-20 06:38:47.386780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.987 Running I/O for 10 seconds... 
00:27:28.987 06:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:28.987 06:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:27:28.987 06:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:28.987 06:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.987 06:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:29.248 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:29.508 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:29.508 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:29.508 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:29.508 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:29.508 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.508 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:29.508 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.768 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:27:29.768 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:27:29.768 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:29.768 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:29.768 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2782267 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2782267 ']' 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2782267 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2782267 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:30.043 06:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2782267' 00:27:30.043 killing process with pid 2782267 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 2782267 00:27:30.043 06:38:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 2782267 00:27:30.043 [2024-11-20 06:38:49.800030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.043 [2024-11-20 06:38:49.800166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 
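The read_io_count progression traced above (3, then 67, then 131) is waitforio (shutdown.sh@58-@70) polling bdevperf's RPC socket until Nvme1n1 has completed at least 100 reads, i.e. until I/O is demonstrably in flight, at which point killprocess 2782267 shoots down the target mid-workload. A sketch of that loop as traced, not the verbatim helper:

waitforio() {
    local rpc_sock=$1 bdev=$2   # /var/tmp/bdevperf.sock and Nvme1n1 in this run
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do                     # at most 10 polls (@60)
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')           # (@61)
        if ((read_io_count >= 100)); then               # (@64)
            ret=0                                       # (@65)
            break                                       # (@66)
        fi
        sleep 0.25                                      # (@68)
    done
    return $ret                                         # 0 once I/O is flowing (@70)
}

The recv-state errors and SQ DELETION aborts that follow are the fallout one would expect from killing the target with queue pairs open and writes outstanding, which is precisely the shutdown path nvmf_shutdown_tc3 exercises.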
00:27:30.043 [2024-11-20 06:38:49.800170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.044 (same message repeated, timestamps 06:38:49.800175 through 06:38:49.800370) 00:27:30.044 [2024-11-20 06:38:49.800374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xe8a470 is same with the state(6) to be set 00:27:30.044 [2024-11-20 06:38:49.800714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.044 [2024-11-20 06:38:49.800760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.044 [2024-11-20 06:38:49.800771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.044 [2024-11-20 06:38:49.800779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.044 [2024-11-20 06:38:49.800787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.044 [2024-11-20 06:38:49.800795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.044 [2024-11-20 06:38:49.800803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.044 [2024-11-20 06:38:49.800811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.044 [2024-11-20 06:38:49.800818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42aa0 is same with the state(6) to be set 00:27:30.044 [2024-11-20 06:38:49.800853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.044 [2024-11-20 06:38:49.800862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.044 [2024-11-20 06:38:49.800871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.044 [2024-11-20 06:38:49.800878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.044 [2024-11-20 06:38:49.800886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.044 [2024-11-20 06:38:49.800893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.044 [2024-11-20 06:38:49.800901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.044 [2024-11-20 06:38:49.800909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.044 [2024-11-20 06:38:49.800916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e71b0 is same with the state(6) to be set 00:27:30.044 [2024-11-20 06:38:49.800944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.044 [2024-11-20 06:38:49.800953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.044 [2024-11-20 
06:38:49.800961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:30.044 [2024-11-20 06:38:49.800969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.044 [2024-11-20 06:38:49.800977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:30.044 [2024-11-20 06:38:49.800984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.044 [2024-11-20 06:38:49.800992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:30.044 [2024-11-20 06:38:49.801000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.044 [2024-11-20 06:38:49.801008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dafd0 is same with the state(6) to be set
00:27:30.044 [2024-11-20 06:38:49.801340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.044 [2024-11-20 06:38:49.801344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a940 is same with the state(6) to be set
00:27:30.044 [2024-11-20 06:38:49.801359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.044 (the tqpair=0xe8a940 message above repeats with timestamps 06:38:49.801368 through 06:38:49.801634, interleaved mid-record with the write aborts that follow)
00:27:30.044 [2024-11-20 06:38:49.801374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.044 [2024-11-20 06:38:49.801385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.044 [2024-11-20 06:38:49.801395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.044 [2024-11-20 06:38:49.801404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.044 [2024-11-20 06:38:49.801414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.044 [2024-11-20 06:38:49.801427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.045 [2024-11-20 06:38:49.801437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.045 [2024-11-20 06:38:49.801445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.045 [2024-11-20 06:38:49.801457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.045 [2024-11-20 06:38:49.801467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.045 [2024-11-20 06:38:49.801477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.045 [2024-11-20 06:38:49.801485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.045 [2024-11-20 06:38:49.801497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.045 [2024-11-20 06:38:49.801506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.045 [2024-11-20 06:38:49.801516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.045 [2024-11-20 06:38:49.801524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.045 [2024-11-20 06:38:49.801536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.045 [2024-11-20 06:38:49.801546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.045 [2024-11-20 06:38:49.801556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.045 [2024-11-20 06:38:49.801563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.045 [2024-11-20 06:38:49.801574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.045 [2024-11-20 06:38:49.801582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.045 [2024-11-20 06:38:49.801591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.045 [2024-11-20 06:38:49.801600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.045 [2024-11-20 06:38:49.801612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.045 [2024-11-20 06:38:49.801620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.045 [2024-11-20 06:38:49.801632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.045 [2024-11-20 06:38:49.801640]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a940 is same with the state(6) to be set 00:27:30.045 [2024-11-20 06:38:49.801640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.045 [2024-11-20 06:38:49.801646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a940 is same with the state(6) to be set 00:27:30.045 [2024-11-20 06:38:49.801651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a940 is same with the state(6) to be set 00:27:30.045 [2024-11-20 06:38:49.801651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.045 [2024-11-20 06:38:49.801656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8a940 is same with the state(6) to be set 00:27:30.045 [2024-11-20 06:38:49.801659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.045 [2024-11-20 06:38:49.801669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.045 [2024-11-20 06:38:49.801676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.045 [2024-11-20 06:38:49.801686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.045 [2024-11-20 06:38:49.801693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.045 [2024-11-20 06:38:49.801702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.045 [2024-11-20 06:38:49.801709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.045 [2024-11-20 06:38:49.801720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.045 [2024-11-20 06:38:49.801727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.045 [2024-11-20 06:38:49.801737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.045 [2024-11-20 06:38:49.801744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.045 [2024-11-20 06:38:49.801761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.045 [2024-11-20 06:38:49.801770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.045 [2024-11-20 06:38:49.801780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.045 [2024-11-20 06:38:49.801787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.045 [2024-11-20 06:38:49.801796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.045 [2024-11-20 06:38:49.801804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.801820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.801837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.801854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.801870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.801887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.801904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.801920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.801937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.801953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.801970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.801988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.801998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046 [2024-11-20 06:38:49.802283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046 [2024-11-20 06:38:49.802290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:30.046
[2024-11-20 06:38:49.802300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046
[2024-11-20 06:38:49.802307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046
[2024-11-20 06:38:49.802316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046
[2024-11-20 06:38:49.802319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8ae30 is same with the state(6) to be set 00:27:30.046
[tcp.c:1773:nvmf_tcp_qpair_set_recv_state message for tqpair=0xe8ae30 repeated at timestamps 06:38:49.802340 through 06:38:49.802654]
[2024-11-20 06:38:49.802323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046
[2024-11-20 06:38:49.802333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046
[2024-11-20 06:38:49.802341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046
[2024-11-20 06:38:49.802353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046
[2024-11-20 06:38:49.802361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046
[2024-11-20 06:38:49.802373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.046
[2024-11-20 06:38:49.802381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.046
[2024-11-20 06:38:49.802391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.047
[2024-11-20 06:38:49.802399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.047
[2024-11-20 06:38:49.802409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.047
[2024-11-20 06:38:49.802418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.047
[2024-11-20 06:38:49.802429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.047
[2024-11-20 06:38:49.802437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.047
[2024-11-20 06:38:49.802447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.047
[2024-11-20 06:38:49.802456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.047
[2024-11-20 06:38:49.802468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.047
[2024-11-20 06:38:49.802477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.047
[2024-11-20 06:38:49.802486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.047
[2024-11-20 06:38:49.802494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.047
[2024-11-20 06:38:49.802522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:30.047
[2024-11-20 06:38:49.803179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b300 is same with the state(6) to be set 00:27:30.047
[tcp.c:1773:nvmf_tcp_qpair_set_recv_state message for tqpair=0xe8b300 repeated at timestamps 06:38:49.803201 through 06:38:49.803498]
[2024-11-20 06:38:49.803825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048
[2024-11-20 06:38:49.803845]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.048 [2024-11-20 06:38:49.803857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048 [2024-11-20 06:38:49.803866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.048 [2024-11-20 06:38:49.803875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048 [2024-11-20 06:38:49.803883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.048 [2024-11-20 06:38:49.803893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048 [2024-11-20 06:38:49.803900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.048 [2024-11-20 06:38:49.803910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048 [2024-11-20 06:38:49.803917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.048 [2024-11-20 06:38:49.803926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048 [2024-11-20 06:38:49.803933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.048 [2024-11-20 06:38:49.803943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048 [2024-11-20 06:38:49.803950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.048 [2024-11-20 06:38:49.803959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048 [2024-11-20 06:38:49.803966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.048 [2024-11-20 06:38:49.803976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048 [2024-11-20 06:38:49.803984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.048 [2024-11-20 06:38:49.803994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048 [2024-11-20 06:38:49.804001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.048 [2024-11-20 06:38:49.804010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048 [2024-11-20 06:38:49.804018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.048
[2024-11-20 06:38:49.804028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.048
[2024-11-20 06:38:49.804035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.049
[tcp.c:1773:nvmf_tcp_qpair_set_recv_state message for tqpair=0xe8b7d0 repeated at timestamps 06:38:49.804135 through 06:38:49.804392]
[2024-11-20 06:38:49.804124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.049
[2024-11-20 06:38:49.804380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.049
[2024-11-20 06:38:49.804392] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-20 06:38:49.804411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 he state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:12[2024-11-20 06:38:49.804424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 he state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804468] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with t[2024-11-20 06:38:49.804479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:12he state(6) to be set 00:27:30.050 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8b7d0 is same with the state(6) to be set 00:27:30.050 [2024-11-20 06:38:49.804489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.050 [2024-11-20 06:38:49.804931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.050 [2024-11-20 06:38:49.804938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.051 [2024-11-20 06:38:49.804948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.051 [2024-11-20 06:38:49.804955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
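The "(00/08)" in the ABORTED - SQ DELETION completions above is the NVMe status printed as (SCT/SC): status code type 0x00, generic command status, and status code 0x08, command aborted due to SQ deletion. That is what the host driver reports for every I/O still in flight when its queue pair is torn down, so a burst of these during a controller reset is expected noise rather than a data-path failure. As a minimal illustrative sketch only (not code from this test run), a host-side completion callback could classify the status with SPDK's public spdk_nvme_cpl definitions; the callback and counter names here are hypothetical:

    #include "spdk/nvme.h"

    /* Hypothetical counter of I/O aborted because their SQ was deleted. */
    static uint64_t g_sq_deletion_aborts;

    /* Illustrative I/O completion callback (spdk_nvme_cmd_cb signature). */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            (void)ctx;
            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The "(00/08)" case in the log: the qpair was deleted
                     * while this command was still outstanding. */
                    g_sq_deletion_aborts++;
            }
    }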
00:27:30.051 [2024-11-20 06:38:49.804964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.051 [2024-11-20 06:38:49.804971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.051 [2024-11-20 06:38:49.804981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.051 [2024-11-20 06:38:49.804988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.051 [2024-11-20 06:38:49.805151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8bca0 is same with the state(6) to be set
00:27:30.051 [2024-11-20 06:38:49.805167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8bca0 is same with the state(6) to be set
[repeated run, 06:38:49.805696-806831: tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3f90 is same with the state(6) to be set]
[de-interleaved, 06:38:49.806847-809045: nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* records for WRITE sqid:1 cid:10-23 nsid:1, lba 25856-27520 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by an ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 completion, crossed with repeated tcp.c:1773 recv-state errors, first for tqpair=0x10f3f90 and from 06:38:49.808393 for tqpair=0x10f4480]
[de-interleaved, 06:38:49.809094-813063: WRITE command / ABORTED - SQ DELETION completion pairs for sqid:1 cid:24-43 nsid:1, lba 27648-30080 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, crossed with repeated tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* records for tqpair=0x10f4480]
00:27:30.053 [2024-11-20 06:38:49.813116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.053 [2024-11-20 06:38:49.813163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.053 [2024-11-20 06:38:49.813221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.053 [2024-11-20 06:38:49.813268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.053 [2024-11-20 06:38:49.813321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.053 [2024-11-20 06:38:49.813367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[uninterleaved run, 06:38:49.813419-815140: WRITE command / ABORTED - SQ DELETION completion pairs for sqid:1 cid:47-63 nsid:1, lba 30592-32640 in steps of 128, len:128]
[uninterleaved run, 06:38:49.815192-816152: READ command / ABORTED - SQ DELETION completion pairs for sqid:1 cid:0-9 nsid:1, lba 24576-25728 in steps of 128, len:128]
00:27:30.054 [2024-11-20 06:38:49.816217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:30.054 [2024-11-20 06:38:49.817735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:30.054 [2024-11-20 06:38:49.817771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:27:30.054 [2024-11-20 06:38:49.817789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42aa0 (9): Bad file descriptor
00:27:30.054 [2024-11-20 06:38:49.817801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e71b0 (9): Bad file descriptor
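The "CQ transport error -6 (No such device or address)" is -ENXIO surfacing from spdk_nvme_qpair_process_completions() once the TCP connection under the queue pair is gone; the "resetting controller" notices that follow are the host side reconnecting each controller, and the aborted ASYNC EVENT REQUESTs below are its admin queue being drained on the way down. A rough sketch of the same recovery against the raw NVMe driver, assuming a simple single-threaded poller (poll_io and its policy are hypothetical glue, not this test's code):

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Illustrative poller: drain completions and recover when the transport
     * connection has died, mirroring what the log shows happening here. */
    static void
    poll_io(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

            if (rc == -ENXIO) {
                    /* Transport failure (e.g. the TCP socket above went away):
                     * reset the controller; outstanding I/O completes as
                     * ABORTED - SQ DELETION, as seen throughout this log. */
                    if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
                            /* I/O qpairs must be reconnected before reuse. */
                            spdk_nvme_ctrlr_reconnect_io_qpair(qpair);
                    }
            }
    }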
[admin queue drain, 06:38:49.817839-817907: nvme_qpair.c: 223:nvme_admin_qpair_print_command *NOTICE* records for ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each followed by an ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 completion]
00:27:30.054 [2024-11-20 06:38:49.817916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc54740 is same with the state(6) to be set
00:27:30.054 [2024-11-20 06:38:49.817944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dafd0 (9): Bad file descriptor
[second admin queue drain, 06:38:49.817972-818027: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 records with ABORTED - SQ DELETION completions; no completion is printed for cid:3 before the next burst]
[repeated run, 06:38:49.827726-827924: tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4480 is same with the state(6) to be set]
00:27:30.054 [2024-11-20 06:38:49.831258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.054 [2024-11-20 06:38:49.831292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dadf0 is same with the state(6) to be set
00:27:30.054 [2024-11-20 06:38:49.831357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:30.054 [2024-11-20 06:38:49.831371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:30.054 [2024-11-20 06:38:49.831382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:30.054 [2024-11-20 06:38:49.831392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.054 [2024-11-20 06:38:49.831402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc04f20 is same with the state(6) to be set 00:27:30.055 [2024-11-20 06:38:49.831470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e2a0 is same with the state(6) to be set 00:27:30.055 [2024-11-20 06:38:49.831585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 
06:38:49.831646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd610 is same with the state(6) to be set 00:27:30.055 [2024-11-20 06:38:49.831694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.831775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.831784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4a40 is same with the state(6) to be set 00:27:30.055 [2024-11-20 06:38:49.833792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:27:30.055 [2024-11-20 06:38:49.833831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e2a0 (9): Bad file descriptor 00:27:30.055 [2024-11-20 06:38:49.833883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc54740 (9): Bad file descriptor 00:27:30.055 [2024-11-20 06:38:49.833926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.833937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.833948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.833957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.833967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.833977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:30.055 [2024-11-20 06:38:49.833987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.055 [2024-11-20 06:38:49.833996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.834005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3aeb0 is same with the state(6) to be set 00:27:30.055 [2024-11-20 06:38:49.834035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dadf0 (9): Bad file descriptor 00:27:30.055 [2024-11-20 06:38:49.834056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc04f20 (9): Bad file descriptor 00:27:30.055 [2024-11-20 06:38:49.834079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fd610 (9): Bad file descriptor 00:27:30.055 [2024-11-20 06:38:49.834100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e4a40 (9): Bad file descriptor 00:27:30.055 [2024-11-20 06:38:49.835530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.055 [2024-11-20 06:38:49.835559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e71b0 with addr=10.0.0.2, port=4420 00:27:30.055 [2024-11-20 06:38:49.835570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e71b0 is same with the state(6) to be set 00:27:30.055 [2024-11-20 06:38:49.835986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.055 [2024-11-20 06:38:49.836030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42aa0 with addr=10.0.0.2, port=4420 00:27:30.055 [2024-11-20 06:38:49.836044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42aa0 is same with the state(6) to be set 00:27:30.055 [2024-11-20 06:38:49.836132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.055 [2024-11-20 06:38:49.836148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.055 [2024-11-20 06:38:49.836165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.055 [2024-11-20 06:38:49.836175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836236] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.056 [2024-11-20 06:38:49.836757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.056 [2024-11-20 06:38:49.836767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.836779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.836788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.836800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.836809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.836821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.836830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.836842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.836851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.836863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.836872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.836884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.836893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.836905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.836914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.836927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.836936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.836948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.836957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.836969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.836978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.836990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.836999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:30.057 [2024-11-20 06:38:49.837106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 06:38:49.837298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.057 [2024-11-20 
06:38:49.837320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.057 [2024-11-20 06:38:49.837332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.837341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.837353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.837362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.837374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.837383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.837396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.837405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.837416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.837426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.837437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.837447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.837458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.837467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.837479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.837488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.837500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.837509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.839478] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:30.058 [2024-11-20 06:38:49.841822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:27:30.058 [2024-11-20 06:38:49.842109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.058 [2024-11-20 06:38:49.842135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0e2a0 with addr=10.0.0.2, port=4420 00:27:30.058 [2024-11-20 06:38:49.842153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e2a0 is same with the state(6) to be set 00:27:30.058 [2024-11-20 06:38:49.842170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e71b0 (9): Bad file descriptor 00:27:30.058 [2024-11-20 06:38:49.842186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42aa0 (9): Bad file descriptor 00:27:30.058 [2024-11-20 06:38:49.842313] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:30.058 [2024-11-20 06:38:49.842369] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:30.058 [2024-11-20 06:38:49.842421] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:30.058 [2024-11-20 06:38:49.842479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.842496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.842515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.842528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.842543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.842555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.842568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbeba40 is same with the state(6) to be set 00:27:30.058 [2024-11-20 06:38:49.842680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.842695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.842714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.842725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.842740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.842763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.842778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.842790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.842805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.842816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.842831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.058 [2024-11-20 06:38:49.842842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.058 [2024-11-20 06:38:49.842858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.842870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.842895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.842907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.842922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.842933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.842948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.842960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.842973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbecf90 is same with the state(6) to be set 00:27:30.059 [2024-11-20 06:38:49.843384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.059 [2024-11-20 06:38:49.843407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7dafd0 with addr=10.0.0.2, port=4420 00:27:30.059 [2024-11-20 06:38:49.843418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dafd0 is same with the state(6) to be set 00:27:30.059 [2024-11-20 06:38:49.843434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e2a0 (9): Bad file descriptor 00:27:30.059 [2024-11-20 06:38:49.843448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:27:30.059 [2024-11-20 06:38:49.843459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:27:30.059 [2024-11-20 06:38:49.843472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:30.059 [2024-11-20 06:38:49.843485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:27:30.059 [2024-11-20 06:38:49.843498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:27:30.059 [2024-11-20 06:38:49.843508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:27:30.059 [2024-11-20 06:38:49.843519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:27:30.059 [2024-11-20 06:38:49.843529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:27:30.059 [2024-11-20 06:38:49.846986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:27:30.059 [2024-11-20 06:38:49.847014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:27:30.059 [2024-11-20 06:38:49.847033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3aeb0 (9): Bad file descriptor 00:27:30.059 [2024-11-20 06:38:49.847060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dafd0 (9): Bad file descriptor 00:27:30.059 [2024-11-20 06:38:49.847074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:27:30.059 [2024-11-20 06:38:49.847084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:27:30.059 [2024-11-20 06:38:49.847095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:27:30.059 [2024-11-20 06:38:49.847105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:27:30.059 [2024-11-20 06:38:49.847524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.059 [2024-11-20 06:38:49.847553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc54740 with addr=10.0.0.2, port=4420 00:27:30.059 [2024-11-20 06:38:49.847566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc54740 is same with the state(6) to be set 00:27:30.059 [2024-11-20 06:38:49.847586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:27:30.059 [2024-11-20 06:38:49.847597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:27:30.059 [2024-11-20 06:38:49.847608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:27:30.059 [2024-11-20 06:38:49.847618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:27:30.059 [2024-11-20 06:38:49.847663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.847677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.847694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.847706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.847721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.847732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.847758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.847771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.847786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.847798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.847813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.847824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.847839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.847851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.847866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.847877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.847892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.847904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.847919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.847931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 
06:38:49.847949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.059 [2024-11-20 06:38:49.847961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.059 [2024-11-20 06:38:49.847976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.060 [2024-11-20 06:38:49.847988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.060 [2024-11-20 06:38:49.848003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.060 [2024-11-20 06:38:49.848015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.060 [2024-11-20 06:38:49.848030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.060 [2024-11-20 06:38:49.848042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.060 [2024-11-20 06:38:49.848057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.060 [2024-11-20 06:38:49.848069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.060 [2024-11-20 06:38:49.848083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.060 [2024-11-20 06:38:49.848095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.060 [2024-11-20 06:38:49.848110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.060 [2024-11-20 06:38:49.848122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.060 [2024-11-20 06:38:49.848137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.060 [2024-11-20 06:38:49.848149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.060 [2024-11-20 06:38:49.848164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.060 [2024-11-20 06:38:49.848176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.060 [2024-11-20 06:38:49.848191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.060 [2024-11-20 06:38:49.848202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.060 [2024-11-20 06:38:49.848217] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:30.060 [2024-11-20 06:38:49.848229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ/completion pair repeats for cid:32-63 on this qpair, lba stepping by 128 from 20480 to 24448, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:30.061 [2024-11-20 06:38:49.849061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8c2d0 is same with the state(6) to be set
00:27:30.061 [2024-11-20 06:38:49.850525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... READ/completion pairs repeat for cid:1-63, lba 16512 through 24448, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:30.063 [2024-11-20 06:38:49.851839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8d690 is same with the state(6) to be set
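Every completion in these bursts carries the same status, printed by spdk_nvme_print_completion as (00/08): Status Code Type 0x0 (generic command status) with Status Code 0x08, Command Aborted due to SQ Deletion, i.e. the reads were still outstanding when their submission queue was torn down. A minimal sketch of decoding that printed tuple, assuming only the (SCT/SC) format visible above; the helper and its table are illustrative, not SPDK API:

# Decode the "(SCT/SC)" tuple that spdk_nvme_print_completion prints,
# e.g. "00/08" in the entries above. The table is an illustrative subset
# covering only the generic status codes seen in this log, not a complete
# NVMe status map.
GENERIC_STATUS = {  # Status Code Type 0x0: generic command status
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(printed: str) -> str:
    """Turn a printed tuple like '00/08' into a readable status name."""
    sct, sc = (int(part, 16) for part in printed.split("/"))
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
    return "sct=0x%x sc=0x%02x" % (sct, sc)

print(decode_status("00/08"))  # -> ABORTED - SQ DELETION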
00:27:30.063 [2024-11-20 06:38:49.853354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... READ/completion pairs repeat for cid:1-63, lba stepping by 128 from 24704 to 32640, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:30.065 [2024-11-20 06:38:49.854660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe7930 is same with the state(6) to be set
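The three bursts above (and the fourth that follows) have an identical mechanical shape: a command print whose only moving parts are the opcode, cid, and lba, followed by the same abort completion. When triaging long autotest logs it can help to fold such runs down to one line per burst; a small sketch of that follows, assuming one log entry per line in the exact nvme_io_qpair_print_command format shown here. The script is hypothetical triage tooling, not part of the test:

import re
import sys

# Matches the command print visible in this log, e.g.
#   READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK ...
CMD = re.compile(r"(READ|WRITE) sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)")
# Every completion in these bursts repeats the same text, so completions
# are folded into the burst summary rather than echoed.
CPL = re.compile(r"ABORTED - SQ DELETION \(00/08\)")

def condense(lines):
    """Collapse consecutive command prints that differ only in cid/lba."""
    run = []  # (opcode, sqid, cid, lba, length) of the burst being built

    def flush():
        if run:
            first, last = run[0], run[-1]
            yield ("[%dx %s sqid:%d cid:%d-%d lba:%d-%d len:%d, "
                   "all ABORTED - SQ DELETION (00/08)]"
                   % (len(run), first[0], first[1], first[2], last[2],
                      first[3], last[3], first[4]))
            run.clear()

    for line in lines:
        m = CMD.search(line)
        if m:
            op = m.group(1)
            sqid, cid, lba, length = map(int, m.group(2, 3, 4, 5))
            if run and run[-1][0] != op:  # opcode changed: close the burst
                yield from flush()
            run.append((op, sqid, cid, lba, length))
        elif CPL.search(line):
            continue  # counted into the burst summary above
        else:
            yield from flush()          # any other entry ends the burst
            yield line.rstrip("\n")     # and is echoed unchanged
    yield from flush()

if __name__ == "__main__":
    for out in condense(sys.stdin):
        print(out)

Fed a one-entry-per-line form of this section, each 64-command burst collapses to a single summary line while the nvme_tcp state-change errors pass through untouched.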
00:27:30.065 [2024-11-20 06:38:49.856179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... a fourth burst follows on sqid:1 with WRITE cid:0-5 (lba 24576-25216) interleaved among READ cid:6-48 (lba 17152-22528), every completion again ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the burst is still in progress at this point ...]
00:27:30.067 [2024-11-20 06:38:49.857183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.067 [2024-11-20 06:38:49.857419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.067 [2024-11-20 06:38:49.857428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.068 [2024-11-20 06:38:49.857439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.068 [2024-11-20 06:38:49.857448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.068 [2024-11-20 06:38:49.857459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.068 [2024-11-20 06:38:49.857468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.068 [2024-11-20 06:38:49.857479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:30.068 [2024-11-20 06:38:49.857488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.068 [2024-11-20 06:38:49.857498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe8e70 is same with the state(6) to be set 00:27:30.068 [2024-11-20 06:38:49.860442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:27:30.068 [2024-11-20 06:38:49.860467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:27:30.068 [2024-11-20 06:38:49.860481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:27:30.068 [2024-11-20 06:38:49.860491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:27:30.068 [2024-11-20 06:38:49.860500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:27:30.068 [2024-11-20 06:38:49.860764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.068 [2024-11-20 06:38:49.860779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3aeb0 with addr=10.0.0.2, port=4420 00:27:30.068 [2024-11-20 06:38:49.860787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3aeb0 is same with the state(6) to be set 00:27:30.068 [2024-11-20 06:38:49.860799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc54740 (9): Bad file descriptor 00:27:30.068 [2024-11-20 06:38:49.860828] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, 
already in progress.
00:27:30.068 [2024-11-20 06:38:49.860842] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:27:30.068 [2024-11-20 06:38:49.860858] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:27:30.068 [2024-11-20 06:38:49.860868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3aeb0 (9): Bad file descriptor
00:27:30.068 [2024-11-20 06:38:49.860938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:27:30.068 task offset: 26624 on job bdev=Nvme1n1 fails
00:27:30.068 Latency(us) [2024-11-20T05:38:49.988Z]
(all ten jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended with error after the runtime shown)

Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average    min       max
Nvme1n1            : 0.93        206.23   12.89   68.74   0.00  230003.68   4096.00  279620.27
Nvme2n1            : 0.96        132.84    8.30   66.42   0.00  311174.54  19988.48  288358.40
Nvme3n1            : 0.97        142.57    8.91   54.36   0.00  307563.24  17803.95  309329.92
Nvme4n1            : 0.98        130.91    8.18   65.45   0.00  303027.20  43909.12  248162.99
Nvme5n1            : 0.98        195.80   12.24   65.27   0.00  223100.37  18568.53  255153.49
Nvme6n1            : 0.98        136.26    8.52   65.08   0.00  283191.92  11468.80  269134.51
Nvme7n1            : 0.96        200.39   12.52   66.80   0.00  207788.80  25668.27  276125.01
Nvme8n1            : 0.97        194.92   12.18    3.09   0.00  271228.02  36044.80  255153.49
Nvme9n1            : 0.97        193.59   12.10   10.30   0.00  259297.21   3741.01  272629.76
Nvme10n1           : 0.94        135.81    8.49   67.91   0.00  252619.09  16056.32  305834.67
===================================================================================================
Total              :             1669.32  104.33  533.42  0.00  260903.46   3741.01  309329.92

00:27:30.068 [2024-11-20 06:38:49.887524] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:30.068 [2024-11-20 06:38:49.887555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:27:30.068 [2024-11-20 06:38:49.887967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.068 [2024-11-20 06:38:49.887983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42aa0 with addr=10.0.0.2, port=4420
00:27:30.068 [2024-11-20 06:38:49.887992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42aa0 is same with the state(6) to be set
00:27:30.068 [2024-11-20 06:38:49.888303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.068 [2024-11-20 06:38:49.888314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e71b0 with addr=10.0.0.2, port=4420
00:27:30.068 [2024-11-20 06:38:49.888322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e71b0 is same with the state(6) to be set
00:27:30.068 [2024-11-20 06:38:49.888486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.068 [2024-11-20 06:38:49.888496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0e2a0 with addr=10.0.0.2, port=4420
00:27:30.068 [2024-11-20 06:38:49.888504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e2a0 is same with the state(6) to be set
00:27:30.068 [2024-11-20 06:38:49.888829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.068 [2024-11-20 06:38:49.888839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7dadf0 with addr=10.0.0.2, port=4420
00:27:30.068 [2024-11-20 06:38:49.888846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dadf0 is same with the state(6) to be set
00:27:30.068 [2024-11-20 06:38:49.889188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.068 [2024-11-20 06:38:49.889197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e4a40 with addr=10.0.0.2, port=4420
00:27:30.068 [2024-11-20 06:38:49.889204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4a40 is same with the state(6) to be set
00:27:30.068 [2024-11-20 06:38:49.889214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:27:30.068 [2024-11-20 06:38:49.889221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:27:30.068 [2024-11-20 06:38:49.889229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
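The latency table above is internally consistent, which is worth checking when reading bdevperf failure output: each Total entry is the plain column sum, and since every job issues fixed 65536-byte I/Os (64 KiB = 1/16 MiB), the MiB/s column is always IOPS/16. A quick awk check with the figures copied from the table (a sketch for the reader, not part of the test):

    # Verify the bdevperf summary: Total = column sums; MiB/s = IOPS/16
    # because each I/O is 65536 B = 1/16 MiB. Values copied from the log.
    awk 'BEGIN {
        n = split("206.23 132.84 142.57 130.91 195.80 136.26 200.39 194.92 193.59 135.81", iops, " ")
        split("68.74 66.42 54.36 65.45 65.27 65.08 66.80 3.09 10.30 67.91", fail, " ")
        for (i = 1; i <= n; i++) { s += iops[i]; f += fail[i] }
        printf "sum IOPS   = %.2f (log says 1669.32)\n", s
        printf "sum Fail/s = %.2f (log says 533.42)\n", f
        printf "Nvme1n1 MiB/s = %.2f (log says 12.89)\n", iops[1] / 16
    }'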
00:27:30.068 [2024-11-20 06:38:49.889238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:27:30.068 [2024-11-20 06:38:49.890344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:27:30.068 [2024-11-20 06:38:49.890723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.068 [2024-11-20 06:38:49.890741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc04f20 with addr=10.0.0.2, port=4420 00:27:30.069 [2024-11-20 06:38:49.890754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc04f20 is same with the state(6) to be set 00:27:30.069 [2024-11-20 06:38:49.890997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.069 [2024-11-20 06:38:49.891007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6fd610 with addr=10.0.0.2, port=4420 00:27:30.069 [2024-11-20 06:38:49.891014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fd610 is same with the state(6) to be set 00:27:30.069 [2024-11-20 06:38:49.891025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42aa0 (9): Bad file descriptor 00:27:30.069 [2024-11-20 06:38:49.891035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e71b0 (9): Bad file descriptor 00:27:30.069 [2024-11-20 06:38:49.891044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e2a0 (9): Bad file descriptor 00:27:30.069 [2024-11-20 06:38:49.891054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dadf0 (9): Bad file descriptor 00:27:30.069 [2024-11-20 06:38:49.891063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e4a40 (9): Bad file descriptor 00:27:30.069 [2024-11-20 06:38:49.891072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:27:30.069 [2024-11-20 06:38:49.891078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:27:30.069 [2024-11-20 06:38:49.891086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:27:30.069 [2024-11-20 06:38:49.891093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:27:30.069 [2024-11-20 06:38:49.891131] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:27:30.069 [2024-11-20 06:38:49.891143] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:27:30.069 [2024-11-20 06:38:49.891156] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:27:30.069 [2024-11-20 06:38:49.891166] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:27:30.069 [2024-11-20 06:38:49.891177] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:27:30.069 [2024-11-20 06:38:49.891570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.069 [2024-11-20 06:38:49.891585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7dafd0 with addr=10.0.0.2, port=4420 00:27:30.069 [2024-11-20 06:38:49.891592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dafd0 is same with the state(6) to be set 00:27:30.069 [2024-11-20 06:38:49.891602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc04f20 (9): Bad file descriptor 00:27:30.069 [2024-11-20 06:38:49.891611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fd610 (9): Bad file descriptor 00:27:30.069 [2024-11-20 06:38:49.891620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:27:30.069 [2024-11-20 06:38:49.891627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:27:30.069 [2024-11-20 06:38:49.891634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:27:30.069 [2024-11-20 06:38:49.891641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:27:30.069 [2024-11-20 06:38:49.891652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:27:30.069 [2024-11-20 06:38:49.891658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:27:30.069 [2024-11-20 06:38:49.891665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:30.069 [2024-11-20 06:38:49.891671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:27:30.069 [2024-11-20 06:38:49.891678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:27:30.069 [2024-11-20 06:38:49.891684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:27:30.069 [2024-11-20 06:38:49.891692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:27:30.069 [2024-11-20 06:38:49.891699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:27:30.069 [2024-11-20 06:38:49.891706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:27:30.069 [2024-11-20 06:38:49.891712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:27:30.069 [2024-11-20 06:38:49.891719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:27:30.069 [2024-11-20 06:38:49.891726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:27:30.069 [2024-11-20 06:38:49.891733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:27:30.069 [2024-11-20 06:38:49.891739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:27:30.069 [2024-11-20 06:38:49.891770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:27:30.069 [2024-11-20 06:38:49.891777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:27:30.069 [2024-11-20 06:38:49.891840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:27:30.069 [2024-11-20 06:38:49.891851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:27:30.069 [2024-11-20 06:38:49.891872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dafd0 (9): Bad file descriptor 00:27:30.069 [2024-11-20 06:38:49.891881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:27:30.069 [2024-11-20 06:38:49.891888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:27:30.069 [2024-11-20 06:38:49.891895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:27:30.069 [2024-11-20 06:38:49.891902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:27:30.069 [2024-11-20 06:38:49.891909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:27:30.069 [2024-11-20 06:38:49.891915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:27:30.069 [2024-11-20 06:38:49.891923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:27:30.069 [2024-11-20 06:38:49.891929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:27:30.069 [2024-11-20 06:38:49.892299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.069 [2024-11-20 06:38:49.892312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc54740 with addr=10.0.0.2, port=4420 00:27:30.069 [2024-11-20 06:38:49.892323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc54740 is same with the state(6) to be set 00:27:30.069 [2024-11-20 06:38:49.892510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.069 [2024-11-20 06:38:49.892520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3aeb0 with addr=10.0.0.2, port=4420 00:27:30.069 [2024-11-20 06:38:49.892527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3aeb0 is same with the state(6) to be set 00:27:30.069 [2024-11-20 06:38:49.892535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:27:30.069 [2024-11-20 06:38:49.892541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:27:30.069 [2024-11-20 06:38:49.892549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:27:30.069 [2024-11-20 06:38:49.892555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:27:30.069 [2024-11-20 06:38:49.892583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc54740 (9): Bad file descriptor 00:27:30.069 [2024-11-20 06:38:49.892593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3aeb0 (9): Bad file descriptor 00:27:30.069 [2024-11-20 06:38:49.892619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:27:30.069 [2024-11-20 06:38:49.892626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:27:30.069 [2024-11-20 06:38:49.892633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:27:30.069 [2024-11-20 06:38:49.892639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:27:30.069 [2024-11-20 06:38:49.892647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:27:30.069 [2024-11-20 06:38:49.892653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:27:30.069 [2024-11-20 06:38:49.892660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:27:30.069 [2024-11-20 06:38:49.892666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
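Every "connect() failed, errno = 111" in the cascade above is ECONNREFUSED: the shutdown test kills the target mid-I/O, so each reconnect attempt to 10.0.0.2 port 4420 is refused by the kernel, and the stale qpair file descriptors then surface as "Bad file descriptor" during the flush. The refusal itself is easy to reproduce in bash (a minimal sketch, assuming nothing listens on the chosen local port):

    # Reproduce errno 111 (ECONNREFUSED), the same failure posix_sock_create()
    # reports once the target app is gone. Assumes no listener on 127.0.0.1:4420.
    if ! (exec 3<>/dev/tcp/127.0.0.1/4420) 2>/dev/null; then
        echo "connect() refused -> errno 111 (ECONNREFUSED), as in the log"
    fi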
00:27:30.330 06:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:27:31.273 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2782581 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2782581 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2782581 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:31.274 rmmod nvme_tcp 00:27:31.274 
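The `NOT wait 2782581` sequence above is autotest's assert-this-fails helper: bdevperf (pid 2782581) is expected to exit non-zero after the target was shut down underneath it. The trace shows wait returning 255, the status being stripped of its signal bit (255 -> 127) and collapsed to 1, and the helper succeeding precisely because the status was non-zero. A simplified sketch of that logic (the real helper in test/common/autotest_common.sh also validates its argument via valid_exec_arg):

    # Simplified sketch of the NOT() idiom traced above: succeed iff the
    # wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$((es & ~128))   # strip the signal bit: 255 -> 127
        (( es != 0 ))                         # non-zero status means "NOT" holds
    }
    ( exit 3 ) &                              # stand-in for the failing bdevperf
    NOT wait $! && echo "command failed, so NOT succeeds"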
rmmod nvme_fabrics 00:27:31.274 rmmod nvme_keyring 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2782267 ']' 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2782267 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2782267 ']' 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2782267 00:27:31.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2782267) - No such process 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2782267 is not found' 00:27:31.274 Process with pid 2782267 is not found 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.274 06:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:33.819 00:27:33.819 real 0m7.849s 00:27:33.819 user 0m19.238s 00:27:33.819 sys 0m1.288s 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:33.819 ************************************ 00:27:33.819 END TEST nvmf_shutdown_tc3 00:27:33.819 ************************************ 00:27:33.819 06:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:33.819 ************************************ 00:27:33.819 START TEST nvmf_shutdown_tc4 00:27:33.819 ************************************ 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.819 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:33.820 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:33.820 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.820 06:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:33.820 Found net devices under 0000:31:00.0: cvl_0_0 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:33.820 Found net devices under 0000:31:00.1: cvl_0_1 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:33.820 06:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:33.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:27:33.820 00:27:33.820 --- 10.0.0.2 ping statistics --- 00:27:33.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.820 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:27:33.820 00:27:33.820 --- 10.0.0.1 ping statistics --- 00:27:33.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.820 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2783795 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2783795 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 2783795 ']' 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.820 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:33.821 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
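The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) is what makes a single-host "phy" run behave like two machines: the target port is moved into its own network namespace, each side gets one end of the 10.0.0.0/24 link, port 4420 is opened in the firewall, and a ping in each direction proves the path before any NVMe traffic starts. Condensed from the trace:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
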
00:27:33.821 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:33.821 06:38:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:34.081 [2024-11-20 06:38:53.787003] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:34.082 [2024-11-20 06:38:53.787066] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.082 [2024-11-20 06:38:53.889973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:34.082 [2024-11-20 06:38:53.942880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.082 [2024-11-20 06:38:53.942927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.082 [2024-11-20 06:38:53.942936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.082 [2024-11-20 06:38:53.942944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.082 [2024-11-20 06:38:53.942950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:34.082 [2024-11-20 06:38:53.945038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.082 [2024-11-20 06:38:53.945196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.082 [2024-11-20 06:38:53.945400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.082 [2024-11-20 06:38:53.945400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:35.024 [2024-11-20 06:38:54.639002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:35.024 06:38:54 
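The "(( i == 0 )) / return 0" pair in the trace above is waitforlisten's retry loop exiting successfully: nvmfappstart backgrounds nvmf_tgt inside the namespace, records $nvmfpid, and blocks until the app answers on /var/tmp/spdk.sock. A simplified sketch of that idea (the real helper in autotest_common.sh does more bookkeeping; the rpc.py probe and timings here are illustrative):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1     # app died while starting
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                       # retries exhausted
    }
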
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.024 06:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:35.024 Malloc1 
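shutdown.sh@21 created the TCP transport, and the @28/@29 loop above appends one block of RPC commands per subsystem to rpcs.txt, which the single rpc_cmd at @36 then replays in one batched session, yielding Malloc1 through Malloc10 and ten cnode subsystems listening on 10.0.0.2:4420. The generated file itself never appears in the log; the sketch below reconstructs its likely shape using SPDK's standard RPC names (malloc sizes, serial numbers, and the batching invocation are illustrative guesses):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # @21, verbatim from the trace
    for i in {1..10}; do
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    rpc_cmd < rpcs.txt                                   # @36: batch-execute the file
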
00:27:35.024 [2024-11-20 06:38:54.750006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.024 Malloc2 00:27:35.024 Malloc3 00:27:35.024 Malloc4 00:27:35.024 Malloc5 00:27:35.024 Malloc6 00:27:35.285 Malloc7 00:27:35.285 Malloc8 00:27:35.285 Malloc9 00:27:35.285 Malloc10 00:27:35.285 06:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.285 06:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:35.285 06:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:35.285 06:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:35.285 06:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2784174 00:27:35.285 06:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:27:35.285 06:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:27:35.545 [2024-11-20 06:38:55.238511] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2783795 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2783795 ']' 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2783795 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2783795 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2783795' 00:27:40.840 killing process with pid 2783795 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 2783795 00:27:40.840 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 2783795 00:27:40.840 [2024-11-20 06:39:00.232993] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999c00 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999c00 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999c00 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999c00 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999c00 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999c00 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a0d0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a0d0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a0d0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a0d0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a0d0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a0d0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a0d0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a0d0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a5a0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a5a0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a5a0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a5a0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a5a0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a5a0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a5a0 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.233706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199a5a0 is same with the 
state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.234313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999730 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.234342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999730 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.234350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999730 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.234357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999730 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.234364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999730 is same with the state(6) to be set 00:27:40.840 [2024-11-20 06:39:00.234371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999730 is same with the state(6) to be set 00:27:40.840 Write completed with error (sct=0, sc=8) 00:27:40.840 Write completed with error (sct=0, sc=8) 00:27:40.840 starting I/O failed: -6 00:27:40.840 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 [2024-11-20 06:39:00.239716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 [2024-11-20 06:39:00.240621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.841 [2024-11-20 06:39:00.240634] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dda90 is same with the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.240651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dda90 is same with the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.240656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dda90 is same with the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.240661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dda90 is same with the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.240666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dda90 is same with the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.240670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dda90 is same with the state(6) to be set 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 [2024-11-20 06:39:00.240878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddf60 is same with the state(6) to be set 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 [2024-11-20 06:39:00.240893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddf60 is same with starting I/O failed: -6 00:27:40.841 the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.240901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddf60 is same with the state(6) to be set 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 [2024-11-20 06:39:00.240905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddf60 is same with the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.240912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddf60 is same with the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.240917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddf60 is same with the state(6) to be set 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 [2024-11-20 06:39:00.240922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddf60 is same with the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.240927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddf60 is same with the state(6) to be set 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O 
failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 starting I/O failed: -6 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 [2024-11-20 06:39:00.241100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d10 is same with starting I/O failed: -6 00:27:40.841 the state(6) to be set 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 [2024-11-20 06:39:00.241122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d10 is same with starting I/O failed: -6 00:27:40.841 the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.241138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d10 is same with the state(6) to be set 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.841 [2024-11-20 06:39:00.241143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d10 is same with the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.241148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d10 is same with starting I/O failed: -6 00:27:40.841 the state(6) to be set 00:27:40.841 [2024-11-20 06:39:00.241155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d10 is same with the state(6) to be set 00:27:40.841 Write completed with error (sct=0, sc=8) 00:27:40.842 [2024-11-20 06:39:00.241163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d10 is same with the state(6) to be set 00:27:40.842 [2024-11-20 06:39:00.241168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d10 is same with the state(6) to be set 00:27:40.842 [2024-11-20 06:39:00.241173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a35d10 is same with the state(6) to be set 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 
Write completed with error (sct=0, sc=8) 00:27:40.842 [2024-11-20 06:39:00.241397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd5c0 is same with the state(6) to be set 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 [2024-11-20 06:39:00.241418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd5c0 is same with the state(6) to be set 00:27:40.842 starting I/O failed: -6 00:27:40.842 [2024-11-20 06:39:00.241425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd5c0 is same with the state(6) to be set 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 [2024-11-20 06:39:00.241433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd5c0 is same with the state(6) to be set 00:27:40.842 starting I/O failed: -6 00:27:40.842 [2024-11-20 06:39:00.241440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd5c0 is same with the state(6) to be set 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 [2024-11-20 06:39:00.241446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd5c0 is same with the state(6) to be set 00:27:40.842 starting I/O failed: -6 00:27:40.842 [2024-11-20 06:39:00.241453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd5c0 is same with the state(6) to be set 00:27:40.842 [2024-11-20 06:39:00.241461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd5c0 is same with Write completed with error (sct=0, sc=8) 00:27:40.842 the state(6) to be set 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 [2024-11-20 06:39:00.241522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 Write completed with 
error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 00:27:40.842 [... the 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' pair repeats several dozen more times, all stamped 00:27:40.842 ...] 00:27:40.842 [2024-11-20 06:39:00.242909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.842 NVMe io qpair process completion error 00:27:40.842 [2024-11-20 06:39:00.242969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd0d0 is same with the state(6) to be set [... same message for tqpair=0x17dd0d0 repeated seven more times, 06:39:00.242986 through 06:39:00.243018 ...] 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 Write completed with error (sct=0, sc=8) 00:27:40.842 starting I/O failed: -6 [... further aborted writes interleaved with 'starting I/O failed: -6' continue into 00:27:40.843 ...] 00:27:40.843 Write completed
00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 [2024-11-20 06:39:00.244123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 
00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 [2024-11-20 06:39:00.244942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.843 starting I/O failed: -6 00:27:40.843 starting I/O failed: -6 00:27:40.843 starting I/O failed: -6 00:27:40.843 starting I/O failed: -6 00:27:40.843 starting I/O failed: -6 00:27:40.843 starting I/O failed: -6 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 
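Everything from here to the end of the capture is the intended payload of nvmf_shutdown_tc4: shutdown.sh@148 started spdk_nvme_perf with 128 queued 44 KiB random writes per connection, @150 slept 5 seconds so the queues were full on all ten subsystems, and @155 killed the target out from under it. Each in-flight command then completes with sct=0, sc=8, which decodes (per the NVMe base spec, not stated in the log) as Command Aborted due to SQ Deletion, and further submissions fail with -6, ENXIO from the dead TCP connection, until perf abandons each qpair with "NVMe io qpair process completion error". The shape of the test, condensed from the trace:

    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    $perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5                   # let every qpair fill to its -q 128 depth
    killprocess $nvmfpid      # kill nvmf_tgt while writes are outstanding
    # expected fallout, visible above: aborted completions (sct=0/sc=8),
    # -ENXIO resubmission failures, then per-qpair teardown in perf
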
00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 Write completed with error (sct=0, sc=8) 00:27:40.843 starting I/O failed: -6 00:27:40.843 [2024-11-20 06:39:00.246258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 [... the 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' pattern repeats through 00:27:40.844 ...] 00:27:40.844 [2024-11-20 06:39:00.247891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.844 NVMe io qpair process completion error [... an identical cascade of aborted writes follows for nqn.2016-06.io.spdk:cnode1 ...] 00:27:40.844 [2024-11-20 06:39:00.249165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 [... more aborted writes ...] 00:27:40.844 [2024-11-20 06:39:00.249970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 [... more aborted writes ...] 00:27:40.845 [2024-11-20 06:39:00.250886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 [... more aborted writes ...] 00:27:40.845 Write
completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.845 starting I/O failed: -6 00:27:40.845 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write 
completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 [2024-11-20 06:39:00.253353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.846 NVMe io qpair process completion error 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 starting I/O failed: -6 00:27:40.846 Write completed with error (sct=0, sc=8) 00:27:40.846 [2024-11-20 06:39:00.254558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport 
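Editorial note: the "CQ transport error -6" lines above come from SPDK's completion-polling path (nvme_qpair.c). A minimal sketch of how a caller observes this condition, assuming a connected I/O qpair; the harness name poll_qpair is illustrative, not from this test:

```c
#include <errno.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Poll one I/O qpair; max_completions of 0 means "drain everything ready". */
static void poll_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc >= 0) {
		return; /* rc completions were reaped */
	}
	if (rc == -ENXIO) {
		/* Matches "CQ transport error -6 (No such device or address)":
		 * the TCP connection behind the qpair is gone; in-flight writes
		 * still complete through their callbacks, with an abort status. */
		fprintf(stderr, "qpair transport failed, needs reconnect\n");
	}
}
```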
00:27:40.846 [2024-11-20 06:39:00.254558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries omitted ...]
00:27:40.846 [2024-11-20 06:39:00.255365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:27:40.847 [2024-11-20 06:39:00.256285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:27:40.847 [2024-11-20 06:39:00.258368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.847 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
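The (sct=0, sc=8) pair in the write-failure lines decodes, per SPDK's NVMe status definitions, to SPDK_NVME_SCT_GENERIC / SPDK_NVME_SC_ABORTED_SQ_DELETION: the writes were aborted because their submission queue was deleted during qpair teardown, not because of a media error. A sketch of a completion callback doing that decode (write_done is a hypothetical name, not this test's code):

```c
#include <stdio.h>

#include "spdk/nvme.h"

/* Completion callback for a write submitted with spdk_nvme_ns_cmd_write(). */
static void write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return;
	}
	printf("Write completed with error (sct=%d, sc=%d)\n",
	       cpl->status.sct, cpl->status.sc);
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Aborted by submission-queue deletion during qpair teardown:
		 * a retry candidate once the qpair is back, not a data error. */
	}
}
```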
00:27:40.847 [2024-11-20 06:39:00.259636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries omitted ...]
00:27:40.848 [2024-11-20 06:39:00.260468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:27:40.848 [2024-11-20 06:39:00.261408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:27:40.849 [2024-11-20 06:39:00.262878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.849 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:27:40.849 [2024-11-20 06:39:00.264057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries omitted ...]
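The interleaved "starting I/O failed: -6" lines are the submit side of the same failure: once a qpair has failed, new writes are rejected up front with -ENXIO rather than queued. A sketch of such a submit path, assuming a DMA-safe payload buffer (spdk_dma_malloc or similar); submit_write and its parameters are illustrative:

```c
#include <stdio.h>

#include "spdk/nvme.h"

static void write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	(void)cpl; /* status decode elided; see the earlier sketch */
}

static int submit_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
			void *buf, uint64_t lba, uint32_t lba_count)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
					write_done, NULL, 0 /* io_flags */);
	if (rc != 0) {
		/* -6 is -ENXIO: the qpair's transport is gone, so the
		 * submission is refused immediately. */
		printf("starting I/O failed: %d\n", rc);
	}
	return rc;
}
```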
00:27:40.849 [2024-11-20 06:39:00.265024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:27:40.850 [2024-11-20 06:39:00.265935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:27:40.850 [2024-11-20 06:39:00.268399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.850 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
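After each "NVMe io qpair process completion error" the test keeps driving I/O against the remaining subsystems; recovery for a failed qpair would typically go through a reconnect before resubmitting the aborted writes. A sketch of that option, assuming spdk_nvme_ctrlr_reconnect_io_qpair() as provided by recent SPDK releases (try_reconnect is a hypothetical helper):

```c
#include <errno.h>

#include "spdk/nvme.h"

/* Attempt to re-establish the transport connection behind a failed qpair. */
static int try_reconnect(struct spdk_nvme_qpair *qpair)
{
	int rc = spdk_nvme_ctrlr_reconnect_io_qpair(qpair);

	if (rc == -EAGAIN) {
		/* Controller reset still in progress; retry the reconnect
		 * from the poll loop later. */
	}
	return rc; /* 0 on success, negative errno otherwise */
}
```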
failed: -6 00:27:40.850 Write completed with error (sct=0, sc=8) 00:27:40.850 Write completed with error (sct=0, sc=8) 00:27:40.850 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 [2024-11-20 06:39:00.269819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error 
(sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 [2024-11-20 06:39:00.270634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, 
sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 [2024-11-20 06:39:00.271549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.851 Write completed with error (sct=0, sc=8) 00:27:40.851 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 
00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 [2024-11-20 06:39:00.273163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.852 NVMe io qpair process completion error 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 [2024-11-20 06:39:00.274389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O 
failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 [2024-11-20 06:39:00.275232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.852 starting I/O failed: -6 00:27:40.852 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting 
I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 [2024-11-20 06:39:00.276179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 
starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 
starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 [2024-11-20 06:39:00.278022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.853 NVMe io qpair process completion error 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 starting I/O failed: -6 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.853 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 
00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 [2024-11-20 06:39:00.279237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with 
error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 [2024-11-20 06:39:00.280050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error 
(sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 [2024-11-20 06:39:00.280990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.854 Write completed with error (sct=0, sc=8) 00:27:40.854 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O 
failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 [2024-11-20 06:39:00.283076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device 
or address) on qpair id 4 00:27:40.855 NVMe io qpair process completion error 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 [2024-11-20 06:39:00.284336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with 
error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.855 Write completed with error (sct=0, sc=8) 00:27:40.855 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 [2024-11-20 06:39:00.285182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, 
sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 [2024-11-20 06:39:00.286122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 
00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 00:27:40.856 starting I/O failed: -6 00:27:40.856 Write completed with error (sct=0, sc=8) 
[... repeated write-error pairs condensed ...]
00:27:40.857 [2024-11-20 06:39:00.287774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:40.857 NVMe io qpair process completion error
00:27:40.857 Initializing NVMe Controllers
00:27:40.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:27:40.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:27:40.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:27:40.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:27:40.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:40.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:27:40.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:27:40.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:27:40.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:27:40.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
[... after each attach the target prints the identical pair "Controller IO queue size 128, less than required." / "Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver."; the ten repetitions are condensed ...]
00:27:40.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:27:40.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:27:40.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:27:40.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:27:40.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:40.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:27:40.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:27:40.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:27:40.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:27:40.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:27:40.857 Initialization complete. Launching workers.
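[Editor's note: the condensed "queue size 128, less than required" warning above is advisory: each controller reports a smaller IO queue than the perf tool asked for, so excess requests queue at the driver. A minimal sketch of how that advice could be applied when re-running the perf binary named in this log; the -q/-o/-w/-t values are illustrative assumptions, not what this job used:

    # Hedged sketch only: lower the queue depth (-q) and IO size (-o) so that
    # outstanding requests fit within the controller's reported queue size of 128.
    # Binary path and subsystem NQN are taken from the log above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 32 -o 4096 -w randwrite -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode8'
]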
00:27:40.857 ========================================================
00:27:40.857 Latency(us)
00:27:40.857 Device Information                                                       :      IOPS     MiB/s   Average       min       max
00:27:40.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:   1929.76     82.92  66346.37    829.84 130793.39
00:27:40.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:   1937.99     83.27  66092.85    743.03 124847.26
00:27:40.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:   1876.77     80.64  67563.25    914.29 124103.13
00:27:40.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:   1887.75     81.11  67193.81    620.02 122705.58
00:27:40.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:   1884.37     80.97  67337.29    805.17 121839.68
00:27:40.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:   1880.78     80.81  67498.88    806.97 122083.43
00:27:40.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:   1892.39     81.31  67112.97    711.09 122824.25
00:27:40.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:   1911.39     82.13  66466.79    708.75 124341.35
00:27:40.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:   1891.76     81.29  67191.88    816.78 127000.60
00:27:40.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1785.57     76.72  71211.28    841.38 128947.06
00:27:40.857 ========================================================
00:27:40.857 Total                                                                    :  18878.51    811.19  67373.68    620.02 130793.39
00:27:40.857 
00:27:40.857 [2024-11-20 06:39:00.290525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136b6b0 is same with the state(6) to be set
[... the same nvme_tcp.c:326 recv-state error repeats between 06:39:00.290568 and 06:39:00.290814 for tqpair=0x136b050, 0x136a390, 0x136b380, 0x136c360, 0x136a060, 0x136a6c0, 0x136a9f0, 0x136b9e0 and 0x136c540; condensed ...]
00:27:40.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:27:40.857 06:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2784174
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2784174
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2784174
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
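[Editor's note: the NOT/wait sequence above is the harness asserting that the perf process already died: wait returns non-zero (es=1) and NOT inverts that into a pass. A simplified sketch of the idiom, under the assumption that the real helpers in test/common/autotest_common.sh do more argument validation than shown here:

    # Hedged sketch of the NOT-inversion pattern traced above (simplified).
    NOT() {
        local es=0
        "$@" || es=$?               # run the command, capture its exit status
        (( es == 0 )) && return 1   # it succeeded, so NOT fails
        return 0                    # it failed, which is the expected outcome
    }

    # Usage mirroring the trace: the perf pid must already have exited non-zero.
    NOT wait 2784174
]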
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:41.801 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2783795 ']'
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2783795
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2783795 ']'
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2783795
00:27:41.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2783795) - No such process
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2783795 is not found'
00:27:41.801 Process with pid 2783795 is not found
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:41.801 06:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:44.347 
00:27:44.347 real	0m10.343s
00:27:44.347 user	0m28.032s
00:27:44.347 sys	0m3.901s
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:27:44.347 ************************************
00:27:44.347 END TEST nvmf_shutdown_tc4
00:27:44.347 ************************************
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:27:44.347 
00:27:44.347 real	0m43.617s
00:27:44.347 user	1m44.522s
00:27:44.347 sys	0m14.014s
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:27:44.347 ************************************
00:27:44.347 END TEST nvmf_shutdown
00:27:44.347 ************************************
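[Editor's note: the cleanup trace above shows the killprocess/kill -0 dance: target pid 2783795 was already gone, so the helper only logs "not found". A condensed sketch of that pattern (simplified; the real helper in autotest_common.sh also special-cases sudo-launched processes, as the later nsid trace shows):

    # Hedged sketch of the killprocess pattern in the trace (simplified).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if kill -0 "$pid" 2>/dev/null; then    # signal 0 only probes existence
            kill "$pid" && wait "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }

    killprocess 2783795   # pid from the log; already exited, so only the message prints
]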
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:27:44.347 ************************************
00:27:44.347 START TEST nvmf_nsid
00:27:44.347 ************************************
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:27:44.347 * Looking for test storage...
00:27:44.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:27:44.347 06:39:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... cmp_versions splits both version strings on IFS=.-: and compares them field by field (scripts/common.sh@333-@368: ver1_l=2, ver2_l=1, decimal 1 vs decimal 2, (( ver1[v] < ver2[v] )) holds); the mechanical per-step trace is condensed ...]
00:27:44.347 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:27:44.347 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:44.347 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:27:44.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:44.347 --rc genhtml_branch_coverage=1
00:27:44.347 --rc genhtml_function_coverage=1
00:27:44.347 --rc genhtml_legend=1
00:27:44.347 --rc geninfo_all_blocks=1
00:27:44.347 --rc geninfo_unexecuted_blocks=1
00:27:44.347 
00:27:44.348 '
[... the identical option block is echoed three more times as LCOV_OPTS=, export 'LCOV=lcov ...' and LCOV='lcov ...' (autotest_common.sh@1704/@1705); duplicates condensed ...]
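[Editor's note: the lt/cmp_versions trace condensed above is the harness checking that the installed lcov is older than 2. A compact sketch of the comparison logic it steps through; this is a simplification, assuming only the '<' operator path of the real scripts/common.sh helper:

    # Hedged sketch of the version comparison traced above (simplified).
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-: ver1 ver2 v
        read -ra ver1 <<< "$1"      # split fields on '.', '-' or ':'
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # lhs greater: '<' fails
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # lhs smaller: '<' holds
        done
        return 1                                              # equal: '<' fails
    }

    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the trace's return 0
]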
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 then build, export and echo a PATH assembled by repeatedly prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the inherited PATH; the four near-identical multi-hundred-character PATH values are condensed ...]
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:44.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid=
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:44.348 06:39:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:52.490 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:52.490 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:52.490 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable
00:27:52.490 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... nvmf/common.sh@315-@322 then declare the empty pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays; the paired "name=()" / "local -a|-A|-ga name" trace entries are condensed ...]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
[... the same mlx+=() append repeats for Mellanox device IDs 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015 and 0x1013 (nvmf/common.sh@332-@344); condensed ...]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:27:52.491 Found 0000:31:00.0 (0x8086 - 0x159b)
[... per-device checks [[ ice == unknown/unbound ]], [[ 0x159b == 0x1017/0x1019 ]], [[ tcp == rdma ]] (nvmf/common.sh@368-@378); condensed ...]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:27:52.491 Found 0000:31:00.1 (0x8086 - 0x159b)
[... identical per-device checks for 0000:31:00.1 condensed ...]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:27:52.491 Found net devices under 0000:31:00.0: cvl_0_0
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
[... the same @410-@429 net-device scan repeats for the second port ...]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:27:52.491 Found net devices under 0000:31:00.1: cvl_0_1
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:52.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:52.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms
00:27:52.491 
00:27:52.491 --- 10.0.0.2 ping statistics ---
00:27:52.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:52.491 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms
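[Editor's note: for readability, the nvmf_tcp_init plumbing just traced can be collapsed into a short standalone script. Every command below is lifted from the trace itself (run as root; cvl_0_0/cvl_0_1 are the two e810 ports found earlier): one port moves into a network namespace to play the target, the other stays in the root namespace as the initiator, and the NVMe/TCP port is opened in the firewall.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target sanity check, as in the log
]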
00:27:52.491 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:52.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:52.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:27:52.492 
00:27:52.492 --- 10.0.0.1 ping statistics ---
00:27:52.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:52.492 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2790114
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2790114
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2790114 ']'
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100
00:27:52.492 06:39:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:52.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:52.492 [2024-11-20 06:39:11.765557] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:27:52.492 [2024-11-20 06:39:11.765623] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:52.492 [2024-11-20 06:39:11.865393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:52.492 [2024-11-20 06:39:11.917059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:52.492 [2024-11-20 06:39:11.917111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:52.492 [2024-11-20 06:39:11.917120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:52.492 [2024-11-20 06:39:11.917128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:52.492 [2024-11-20 06:39:11.917134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:52.492 [2024-11-20 06:39:11.917970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
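[Editor's note: the nvmfappstart/waitforlisten pair above launches the target inside the namespace and then blocks until its RPC socket answers. A stripped-down sketch of that pattern; the polling loop is an assumption about what the real waitforlisten in autotest_common.sh does (the trace only shows max_retries=100 and the wait message), and rpc_get_methods is just a cheap RPC to probe with:

    # Hedged sketch of the app-start pattern traced above (simplified).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!

    # Poll until the RPC server answers on /var/tmp/spdk.sock.
    for ((i = 0; i < 100; i++)); do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" || exit 1   # target died before it started listening
        sleep 0.1
    done
]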
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2790474
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=268a5d0e-9a1c-43be-9df4-ec3c8f2b93c5
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=334f6474-1499-464a-a20a-83f4a1cf3ab2
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=051c99cd-d265-4faa-8981-b1f511f06f07
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:52.753 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:27:52.753 null0
00:27:53.014 null1
00:27:53.014 null2
00:27:53.014 [2024-11-20 06:39:12.685866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:53.014 [2024-11-20 06:39:12.686317] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:27:53.014 [2024-11-20 06:39:12.686388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2790474 ]
00:27:53.014 [2024-11-20 06:39:12.710127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:53.014 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:53.014 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2790474 /var/tmp/tgt2.sock
00:27:53.014 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2790474 ']'
00:27:53.014 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock
00:27:53.014 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100
00:27:53.014 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...'
00:27:53.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...
00:27:53.014 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable
00:27:53.014 06:39:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:27:53.014 [2024-11-20 06:39:12.781760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:53.276 [2024-11-20 06:39:12.834075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:53.276 06:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:27:53.276 06:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0
00:27:53.276 06:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock
00:27:53.537 [2024-11-20 06:39:13.394124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:53.537 [2024-11-20 06:39:13.410315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 ***
00:27:53.537 nvme0n1 nvme0n2
00:27:53.537 nvme1n1
00:27:53.798 06:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect
00:27:53.798 06:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr
00:27:53.798 06:39:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme*
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]]
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]]
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']'
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1
00:27:55.184 06:39:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1
[... waitforblk re-runs the lsblk | grep probe at 06:39:15 until nvme0n1 appears; repeated poll entries condensed ...]
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 268a5d0e-9a1c-43be-9df4-ec3c8f2b93c5
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=268a5d0e9a1c43be9df4ec3c8f2b93c5
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 268A5D0E9A1C43BE9DF4EC3C8F2B93C5
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 268A5D0E9A1C43BE9DF4EC3C8F2B93C5 == \2\6\8\A\5\D\0\E\9\A\1\C\4\3\B\E\9\D\F\4\E\C\3\C\8\F\2\B\9\3\C\5 ]]
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:27:56.128 06:39:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2
00:27:56.128 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:27:56.128 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2
00:27:56.128 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0
00:27:56.128 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 334f6474-1499-464a-a20a-83f4a1cf3ab2
00:27:56.128 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:27:56.128 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2
00:27:56.128 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid
00:27:56.128 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json
00:27:56.128 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:27:56.388 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=334f64741499464aa20a83f4a1cf3ab2
00:27:56.388 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 334F64741499464AA20A83F4A1CF3AB2
00:27:56.388 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 334F64741499464AA20A83F4A1CF3AB2 == \3\3\4\F\6\4\7\4\1\4\9\9\4\6\4\A\A\2\0\A\8\3\F\4\A\1\C\F\3\A\B\2 ]]
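[Editor's note: the nguid checks above all follow the same helper pattern: the expected NGUID is the namespace UUID with dashes stripped and upper-cased, and the actual NGUID is read back from the device with nvme-cli and jq. A simplified sketch of the two helpers as the trace exercises them (the real versions live in nvmf/common.sh and target/nsid.sh and may differ in detail):

    # Hedged sketch of the helpers traced above (simplified; bash 4+).
    uuid2nguid() {
        tr -d - <<< "${1^^}"          # NGUID = UUID upper-cased, dashes removed
    }
    nvme_get_nguid() {
        local ctrlr=$1 nsid=$2 nguid
        nguid=$(nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid)
        echo "${nguid^^}"
    }

    # Usage mirroring the second check in the trace:
    [[ $(uuid2nguid 334f6474-1499-464a-a20a-83f4a1cf3ab2) == $(nvme_get_nguid nvme0 2) ]]
]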
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:27:56.388 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:27:56.388 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:27:56.388 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 051c99cd-d265-4faa-8981-b1f511f06f07 00:27:56.388 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:56.388 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:27:56.388 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:27:56.388 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:27:56.388 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:56.389 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=051c99cdd2654faa8981b1f511f06f07 00:27:56.389 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 051C99CDD2654FAA8981B1F511F06F07 00:27:56.389 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 051C99CDD2654FAA8981B1F511F06F07 == \0\5\1\C\9\9\C\D\D\2\6\5\4\F\A\A\8\9\8\1\B\1\F\5\1\1\F\0\6\F\0\7 ]] 00:27:56.389 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:27:56.649 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:27:56.649 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:27:56.649 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2790474 00:27:56.649 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2790474 ']' 00:27:56.649 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2790474 00:27:56.649 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:27:56.649 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:56.650 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2790474 00:27:56.650 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:56.650 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:56.650 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2790474' 00:27:56.650 killing process with pid 2790474 00:27:56.650 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2790474 00:27:56.650 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2790474 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:56.911 rmmod nvme_tcp 00:27:56.911 rmmod nvme_fabrics 00:27:56.911 rmmod nvme_keyring 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2790114 ']' 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2790114 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2790114 ']' 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2790114 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2790114 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2790114' 00:27:56.911 killing process with pid 2790114 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2790114 00:27:56.911 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2790114 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.173 06:39:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.084 06:39:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:59.084 00:27:59.084 real 0m15.126s 00:27:59.084 user 
0m11.488s 00:27:59.084 sys 0m6.988s 00:27:59.084 06:39:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:59.084 06:39:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:59.084 ************************************ 00:27:59.084 END TEST nvmf_nsid 00:27:59.084 ************************************ 00:27:59.084 06:39:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:59.084 00:27:59.084 real 13m4.937s 00:27:59.084 user 27m13.818s 00:27:59.084 sys 3m57.512s 00:27:59.084 06:39:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:59.084 06:39:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:59.084 ************************************ 00:27:59.084 END TEST nvmf_target_extra 00:27:59.084 ************************************ 00:27:59.346 06:39:19 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:59.346 06:39:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:59.346 06:39:19 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:59.346 06:39:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:59.346 ************************************ 00:27:59.346 START TEST nvmf_host 00:27:59.346 ************************************ 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:59.346 * Looking for test storage... 00:27:59.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:59.346 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:59.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.608 --rc genhtml_branch_coverage=1 00:27:59.608 --rc genhtml_function_coverage=1 00:27:59.608 --rc genhtml_legend=1 00:27:59.608 --rc geninfo_all_blocks=1 00:27:59.608 --rc geninfo_unexecuted_blocks=1 00:27:59.608 00:27:59.608 ' 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:59.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.608 --rc genhtml_branch_coverage=1 00:27:59.608 --rc genhtml_function_coverage=1 00:27:59.608 --rc genhtml_legend=1 00:27:59.608 --rc geninfo_all_blocks=1 00:27:59.608 --rc geninfo_unexecuted_blocks=1 00:27:59.608 00:27:59.608 ' 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:59.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.608 --rc genhtml_branch_coverage=1 00:27:59.608 --rc genhtml_function_coverage=1 00:27:59.608 --rc genhtml_legend=1 00:27:59.608 --rc geninfo_all_blocks=1 00:27:59.608 --rc geninfo_unexecuted_blocks=1 00:27:59.608 00:27:59.608 ' 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:59.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.608 --rc genhtml_branch_coverage=1 00:27:59.608 --rc genhtml_function_coverage=1 00:27:59.608 --rc genhtml_legend=1 00:27:59.608 --rc geninfo_all_blocks=1 00:27:59.608 --rc geninfo_unexecuted_blocks=1 00:27:59.608 00:27:59.608 ' 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:59.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.608 ************************************ 00:27:59.608 START TEST nvmf_multicontroller 00:27:59.608 ************************************ 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:59.608 * Looking for test storage... 
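Every suite in this log, nvmf_multicontroller included, is launched through run_test, which is what produces the START/END banners and the real/user/sys timings seen above. A rough sketch of that wrapper inferred from its visible output only; the actual helper in autotest_common.sh also manages xtrace state and failure bookkeeping:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # the source of the real/user/sys lines in this log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
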
00:27:59.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:27:59.608 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.871 --rc genhtml_branch_coverage=1 00:27:59.871 --rc genhtml_function_coverage=1 00:27:59.871 --rc genhtml_legend=1 00:27:59.871 --rc geninfo_all_blocks=1 00:27:59.871 --rc geninfo_unexecuted_blocks=1 00:27:59.871 00:27:59.871 ' 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.871 --rc genhtml_branch_coverage=1 00:27:59.871 --rc genhtml_function_coverage=1 00:27:59.871 --rc genhtml_legend=1 00:27:59.871 --rc geninfo_all_blocks=1 00:27:59.871 --rc geninfo_unexecuted_blocks=1 00:27:59.871 00:27:59.871 ' 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.871 --rc genhtml_branch_coverage=1 00:27:59.871 --rc genhtml_function_coverage=1 00:27:59.871 --rc genhtml_legend=1 00:27:59.871 --rc geninfo_all_blocks=1 00:27:59.871 --rc geninfo_unexecuted_blocks=1 00:27:59.871 00:27:59.871 ' 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:59.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.871 --rc genhtml_branch_coverage=1 00:27:59.871 --rc genhtml_function_coverage=1 00:27:59.871 --rc genhtml_legend=1 00:27:59.871 --rc geninfo_all_blocks=1 00:27:59.871 --rc geninfo_unexecuted_blocks=1 00:27:59.871 00:27:59.871 ' 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:59.871 06:39:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.871 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:59.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:59.872 06:39:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:27:59.872 06:39:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:28:08.036 
06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:08.036 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:08.036 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.036 06:39:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:08.036 Found net devices under 0000:31:00.0: cvl_0_0 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:08.036 Found net devices under 0000:31:00.1: cvl_0_1 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
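The discovery loop above maps each supported PCI function to its kernel net device by globbing sysfs, which is how the harness learns that the two E810 ports surface as cvl_0_0 and cvl_0_1 on this rig. Condensed from the traced array expansion (addresses and interface names are machine-specific):

for pci in 0000:31:00.0 0000:31:00.1; do
    # Same glob the harness uses: a port's netdev(s) sit under its PCI node.
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net devices under $pci: ${path##*/}"
    done
done
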
00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.036 06:39:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.036 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.036 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.036 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:08.036 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.036 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.036 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:08.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:28:08.037 00:28:08.037 --- 10.0.0.2 ping statistics --- 00:28:08.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.037 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:08.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:28:08.037 00:28:08.037 --- 10.0.0.1 ping statistics --- 00:28:08.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.037 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2795607 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2795607 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2795607 ']' 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:08.037 06:39:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.037 [2024-11-20 06:39:27.317347] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
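By this point the harness has turned the two ports into a point-to-point NVMe/TCP test link: the target port lives in a network namespace (so the target app runs under ip netns exec), its sibling stays in the root namespace as the initiator, and both directions are ping-verified. The traced commands condense to the following, with names and addresses as used on this rig:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port toward the initiator, then check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
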
00:28:08.037 [2024-11-20 06:39:27.317413] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.037 [2024-11-20 06:39:27.418304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:08.037 [2024-11-20 06:39:27.470404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.037 [2024-11-20 06:39:27.470461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.037 [2024-11-20 06:39:27.470470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.037 [2024-11-20 06:39:27.470478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.037 [2024-11-20 06:39:27.470484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.037 [2024-11-20 06:39:27.472632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.037 [2024-11-20 06:39:27.472810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.037 [2024-11-20 06:39:27.472810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.298 [2024-11-20 06:39:28.201466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.298 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.559 Malloc0 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.559 [2024-11-20 06:39:28.279802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.559 [2024-11-20 06:39:28.291665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.559 Malloc1 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2795754 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2795754 /var/tmp/bdevperf.sock 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2795754 ']' 00:28:08.559 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:08.560 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:08.560 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:08.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
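At this point the trace has finished provisioning the target side of the multicontroller test: one TCP transport created with the flags the script passes (-o -u 8192, where -u is the IO unit size), two 64 MiB malloc bdevs with 512-byte blocks, and two subsystems (cnode1 and cnode2) that each export one namespace on the 10.0.0.2:4420 and 10.0.0.2:4421 listeners; bdevperf is then started with -z so it idles until it is configured over /var/tmp/bdevperf.sock. Condensed from the rpc_cmd calls above (rpc_cmd is the harness wrapper; plain scripts/rpc.py is assumed here), a minimal sketch of the cnode1 half of that setup is:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, flags as used by the test
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose the bdev as a namespace
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

cnode2 repeats the same sequence with Malloc1 and serial SPDK00000000000002.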
00:28:08.560 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:08.560 06:39:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.502 NVMe0n1 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.502 1 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.502 request: 00:28:09.502 { 00:28:09.502 "name": "NVMe0", 00:28:09.502 "trtype": "tcp", 00:28:09.502 "traddr": "10.0.0.2", 00:28:09.502 "adrfam": "ipv4", 00:28:09.502 "trsvcid": "4420", 00:28:09.502 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:28:09.502 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:09.502 "hostaddr": "10.0.0.1", 00:28:09.502 "prchk_reftag": false, 00:28:09.502 "prchk_guard": false, 00:28:09.502 "hdgst": false, 00:28:09.502 "ddgst": false, 00:28:09.502 "allow_unrecognized_csi": false, 00:28:09.502 "method": "bdev_nvme_attach_controller", 00:28:09.502 "req_id": 1 00:28:09.502 } 00:28:09.502 Got JSON-RPC error response 00:28:09.502 response: 00:28:09.502 { 00:28:09.502 "code": -114, 00:28:09.502 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:09.502 } 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.502 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.763 request: 00:28:09.763 { 00:28:09.763 "name": "NVMe0", 00:28:09.763 "trtype": "tcp", 00:28:09.763 "traddr": "10.0.0.2", 00:28:09.763 "adrfam": "ipv4", 00:28:09.763 "trsvcid": "4420", 00:28:09.763 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:09.763 "hostaddr": "10.0.0.1", 00:28:09.763 "prchk_reftag": false, 00:28:09.763 "prchk_guard": false, 00:28:09.763 "hdgst": false, 00:28:09.763 "ddgst": false, 00:28:09.763 "allow_unrecognized_csi": false, 00:28:09.763 "method": "bdev_nvme_attach_controller", 00:28:09.763 "req_id": 1 00:28:09.763 } 00:28:09.763 Got JSON-RPC error response 00:28:09.763 response: 00:28:09.763 { 00:28:09.763 "code": -114, 00:28:09.763 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:09.763 } 00:28:09.763 06:39:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.764 request: 00:28:09.764 { 00:28:09.764 "name": "NVMe0", 00:28:09.764 "trtype": "tcp", 00:28:09.764 "traddr": "10.0.0.2", 00:28:09.764 "adrfam": "ipv4", 00:28:09.764 "trsvcid": "4420", 00:28:09.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:09.764 "hostaddr": "10.0.0.1", 00:28:09.764 "prchk_reftag": false, 00:28:09.764 "prchk_guard": false, 00:28:09.764 "hdgst": false, 00:28:09.764 "ddgst": false, 00:28:09.764 "multipath": "disable", 00:28:09.764 "allow_unrecognized_csi": false, 00:28:09.764 "method": "bdev_nvme_attach_controller", 00:28:09.764 "req_id": 1 00:28:09.764 } 00:28:09.764 Got JSON-RPC error response 00:28:09.764 response: 00:28:09.764 { 00:28:09.764 "code": -114, 00:28:09.764 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:28:09.764 } 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:09.764 06:39:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.764 request: 00:28:09.764 { 00:28:09.764 "name": "NVMe0", 00:28:09.764 "trtype": "tcp", 00:28:09.764 "traddr": "10.0.0.2", 00:28:09.764 "adrfam": "ipv4", 00:28:09.764 "trsvcid": "4420", 00:28:09.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:09.764 "hostaddr": "10.0.0.1", 00:28:09.764 "prchk_reftag": false, 00:28:09.764 "prchk_guard": false, 00:28:09.764 "hdgst": false, 00:28:09.764 "ddgst": false, 00:28:09.764 "multipath": "failover", 00:28:09.764 "allow_unrecognized_csi": false, 00:28:09.764 "method": "bdev_nvme_attach_controller", 00:28:09.764 "req_id": 1 00:28:09.764 } 00:28:09.764 Got JSON-RPC error response 00:28:09.764 response: 00:28:09.764 { 00:28:09.764 "code": -114, 00:28:09.764 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:09.764 } 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.764 NVMe0n1 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
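Each of the four rejected attach attempts above fails with code -114, which corresponds to -EALREADY on Linux: bdev_nvme will not reuse the controller name NVMe0 for the network path it already holds (10.0.0.2:4420), for a different subsystem NQN, nor when multipath is explicitly disabled, and even '-x failover' is refused here because the requested path is exactly the one already attached. What does succeed, as the @79 call shows, is adding the second listener's path under the same name and NQN. A minimal sketch of the accepted pattern, with flags taken verbatim from the trace:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1   # first path
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1               # second path, same NQN: accepted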
00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.764 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.026 00:28:10.026 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.026 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:10.026 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:10.026 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.026 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.026 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.026 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:10.026 06:39:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:10.970 { 00:28:10.970 "results": [ 00:28:10.970 { 00:28:10.970 "job": "NVMe0n1", 00:28:10.970 "core_mask": "0x1", 00:28:10.970 "workload": "write", 00:28:10.970 "status": "finished", 00:28:10.970 "queue_depth": 128, 00:28:10.970 "io_size": 4096, 00:28:10.970 "runtime": 1.005611, 00:28:10.970 "iops": 24289.710434750614, 00:28:10.970 "mibps": 94.88168138574459, 00:28:10.970 "io_failed": 0, 00:28:10.970 "io_timeout": 0, 00:28:10.970 "avg_latency_us": 5258.327338628237, 00:28:10.970 "min_latency_us": 3072.0, 00:28:10.970 "max_latency_us": 12451.84 00:28:10.971 } 00:28:10.971 ], 00:28:10.971 "core_count": 1 00:28:10.971 } 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2795754 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' 
-z 2795754 ']' 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2795754 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2795754 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2795754' 00:28:11.232 killing process with pid 2795754 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2795754 00:28:11.232 06:39:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2795754 00:28:11.232 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.232 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:28:11.233 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:28:11.233 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:11.233 [2024-11-20 06:39:28.420449] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:28:11.233 [2024-11-20 06:39:28.420528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795754 ] 00:28:11.233 [2024-11-20 06:39:28.514378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.233 [2024-11-20 06:39:28.567653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.233 [2024-11-20 06:39:29.750065] bdev.c:4746:bdev_name_add: *ERROR*: Bdev name 2eaac1b4-fc0e-4755-91d2-55f1c9ed0284 already exists 00:28:11.233 [2024-11-20 06:39:29.750111] bdev.c:7955:bdev_register: *ERROR*: Unable to add uuid:2eaac1b4-fc0e-4755-91d2-55f1c9ed0284 alias for bdev NVMe1n1 00:28:11.233 [2024-11-20 06:39:29.750121] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:11.233 Running I/O for 1 seconds... 00:28:11.233 24233.00 IOPS, 94.66 MiB/s 00:28:11.233 Latency(us) 00:28:11.233 [2024-11-20T05:39:31.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.233 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:11.233 NVMe0n1 : 1.01 24289.71 94.88 0.00 0.00 5258.33 3072.00 12451.84 00:28:11.233 [2024-11-20T05:39:31.153Z] =================================================================================================================== 00:28:11.233 [2024-11-20T05:39:31.153Z] Total : 24289.71 94.88 0.00 0.00 5258.33 3072.00 12451.84 00:28:11.233 Received shutdown signal, test time was about 1.000000 seconds 00:28:11.233 00:28:11.233 Latency(us) 00:28:11.233 [2024-11-20T05:39:31.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.233 [2024-11-20T05:39:31.153Z] =================================================================================================================== 00:28:11.233 [2024-11-20T05:39:31.153Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:11.233 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:11.493 rmmod nvme_tcp 00:28:11.493 rmmod nvme_fabrics 00:28:11.493 rmmod nvme_keyring 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:28:11.493 
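The summary table in try.txt lines up with the JSON that perform_tests returned earlier: 24289.71 write IOPS at the 4096-byte IO size over the 1.005611 s run is 94.88 MiB/s, and with queue depth 128 the reported average latency of 5258 us is about what Little's law predicts (128 / 24289.71 IOPS ≈ 5.27 ms). The MiB/s figure can be re-derived from the raw fields, for example:

  awk 'BEGIN { printf "%.2f MiB/s\n", 24289.710434750614 * 4096 / (1024 * 1024) }'   # prints 94.88 MiB/s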
06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2795607 ']' 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2795607 00:28:11.493 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 2795607 ']' 00:28:11.494 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2795607 00:28:11.494 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:28:11.494 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:11.494 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2795607 00:28:11.494 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:11.494 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:11.494 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2795607' 00:28:11.494 killing process with pid 2795607 00:28:11.494 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2795607 00:28:11.494 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2795607 00:28:11.494 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:11.755 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:11.755 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:11.755 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:28:11.755 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:28:11.755 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:28:11.755 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:11.755 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:11.755 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:11.755 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.755 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.755 06:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.668 06:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:13.668 00:28:13.668 real 0m14.152s 00:28:13.668 user 0m17.134s 00:28:13.668 sys 0m6.590s 00:28:13.668 06:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:13.668 06:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:13.668 ************************************ 00:28:13.668 END TEST nvmf_multicontroller 00:28:13.668 ************************************ 00:28:13.668 06:39:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:28:13.668 06:39:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:13.668 06:39:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:13.668 06:39:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.668 ************************************ 00:28:13.668 START TEST nvmf_aer 00:28:13.668 ************************************ 00:28:13.668 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:13.928 * Looking for test storage... 00:28:13.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:13.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.928 --rc genhtml_branch_coverage=1 00:28:13.928 --rc genhtml_function_coverage=1 00:28:13.928 --rc genhtml_legend=1 00:28:13.928 --rc geninfo_all_blocks=1 00:28:13.928 --rc geninfo_unexecuted_blocks=1 00:28:13.928 00:28:13.928 ' 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:13.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.928 --rc genhtml_branch_coverage=1 00:28:13.928 --rc genhtml_function_coverage=1 00:28:13.928 --rc genhtml_legend=1 00:28:13.928 --rc geninfo_all_blocks=1 00:28:13.928 --rc geninfo_unexecuted_blocks=1 00:28:13.928 00:28:13.928 ' 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:13.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.928 --rc genhtml_branch_coverage=1 00:28:13.928 --rc genhtml_function_coverage=1 00:28:13.928 --rc genhtml_legend=1 00:28:13.928 --rc geninfo_all_blocks=1 00:28:13.928 --rc geninfo_unexecuted_blocks=1 00:28:13.928 00:28:13.928 ' 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:13.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.928 --rc genhtml_branch_coverage=1 00:28:13.928 --rc genhtml_function_coverage=1 00:28:13.928 --rc genhtml_legend=1 00:28:13.928 --rc geninfo_all_blocks=1 00:28:13.928 --rc geninfo_unexecuted_blocks=1 00:28:13.928 00:28:13.928 ' 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:13.928 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:13.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.929 06:39:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.063 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.063 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:28:22.063 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:22.063 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:22.063 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:22.063 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:22.063 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:22.064 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:22.064 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:22.064 Found net devices under 0000:31:00.0: cvl_0_0 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:22.064 06:39:41 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:22.064 Found net devices under 0000:31:00.1: cvl_0_1 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:22.064 
06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:22.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:28:22.064 00:28:22.064 --- 10.0.0.2 ping statistics --- 00:28:22.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.064 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:28:22.064 00:28:22.064 --- 10.0.0.1 ping statistics --- 00:28:22.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.064 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2800656 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2800656 00:28:22.064 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:22.065 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 2800656 ']' 00:28:22.065 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.065 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:22.065 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.065 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:22.065 06:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.065 [2024-11-20 06:39:41.506100] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
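By this point nvmftestinit has turned the two detected e810 ports into a point-to-point TCP test link: cvl_0_0 becomes the target interface at 10.0.0.2 inside the cvl_0_0_ns_spdk namespace (where nvmf_tgt is then launched via ip netns exec), cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP port 4420, and both directions are verified with a single ping. Pulled out of the trace, the plumbing is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator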
00:28:22.065 [2024-11-20 06:39:41.506166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.065 [2024-11-20 06:39:41.607023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:22.065 [2024-11-20 06:39:41.660493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.065 [2024-11-20 06:39:41.660545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.065 [2024-11-20 06:39:41.660554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.065 [2024-11-20 06:39:41.660562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.065 [2024-11-20 06:39:41.660568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.065 [2024-11-20 06:39:41.662682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.065 [2024-11-20 06:39:41.662843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.065 [2024-11-20 06:39:41.662902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:22.065 [2024-11-20 06:39:41.662905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.713 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:22.713 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:28:22.713 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.713 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:22.713 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.714 [2024-11-20 06:39:42.385004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.714 Malloc0 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.714 [2024-11-20 06:39:42.459514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.714 [ 00:28:22.714 { 00:28:22.714 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:22.714 "subtype": "Discovery", 00:28:22.714 "listen_addresses": [], 00:28:22.714 "allow_any_host": true, 00:28:22.714 "hosts": [] 00:28:22.714 }, 00:28:22.714 { 00:28:22.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.714 "subtype": "NVMe", 00:28:22.714 "listen_addresses": [ 00:28:22.714 { 00:28:22.714 "trtype": "TCP", 00:28:22.714 "adrfam": "IPv4", 00:28:22.714 "traddr": "10.0.0.2", 00:28:22.714 "trsvcid": "4420" 00:28:22.714 } 00:28:22.714 ], 00:28:22.714 "allow_any_host": true, 00:28:22.714 "hosts": [], 00:28:22.714 "serial_number": "SPDK00000000000001", 00:28:22.714 "model_number": "SPDK bdev Controller", 00:28:22.714 "max_namespaces": 2, 00:28:22.714 "min_cntlid": 1, 00:28:22.714 "max_cntlid": 65519, 00:28:22.714 "namespaces": [ 00:28:22.714 { 00:28:22.714 "nsid": 1, 00:28:22.714 "bdev_name": "Malloc0", 00:28:22.714 "name": "Malloc0", 00:28:22.714 "nguid": "85BB3D683F354F10B946168BBAC9E25A", 00:28:22.714 "uuid": "85bb3d68-3f35-4f10-b946-168bbac9e25a" 00:28:22.714 } 00:28:22.714 ] 00:28:22.714 } 00:28:22.714 ] 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2800715 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:28:22.714 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:23.016 Malloc1 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:23.016 Asynchronous Event Request test 00:28:23.016 Attaching to 10.0.0.2 00:28:23.016 Attached to 10.0.0.2 00:28:23.016 Registering asynchronous event callbacks... 00:28:23.016 Starting namespace attribute notice tests for all controllers... 00:28:23.016 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:23.016 aer_cb - Changed Namespace 00:28:23.016 Cleaning up... 
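Distilled, the AER exercise above is: build a subsystem with one namespace, start the aer tool (which registers its callbacks, touches /tmp/aer_touch_file, then blocks), and hot-add a second namespace so the target emits a Changed Namespace List notice (log page 4, as the aer_cb line reports). A condensed sketch of the same sequence; rpc_cmd in the trace wraps scripts/rpc.py against the target's RPC socket:

# Transport, subsystem, first namespace, listener (as in host/aer.sh).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The aer tool touches its file once AER callbacks are armed, then waits.
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
# Hot-adding namespace 2 is what fires the AEN captured above.
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The nvmf_get_subsystems dump that follows shows the result: both namespaces (Malloc0 as nsid 1, Malloc1 as nsid 2) attached to cnode1.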
00:28:23.016 [ 00:28:23.016 { 00:28:23.016 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:23.016 "subtype": "Discovery", 00:28:23.016 "listen_addresses": [], 00:28:23.016 "allow_any_host": true, 00:28:23.016 "hosts": [] 00:28:23.016 }, 00:28:23.016 { 00:28:23.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.016 "subtype": "NVMe", 00:28:23.016 "listen_addresses": [ 00:28:23.016 { 00:28:23.016 "trtype": "TCP", 00:28:23.016 "adrfam": "IPv4", 00:28:23.016 "traddr": "10.0.0.2", 00:28:23.016 "trsvcid": "4420" 00:28:23.016 } 00:28:23.016 ], 00:28:23.016 "allow_any_host": true, 00:28:23.016 "hosts": [], 00:28:23.016 "serial_number": "SPDK00000000000001", 00:28:23.016 "model_number": "SPDK bdev Controller", 00:28:23.016 "max_namespaces": 2, 00:28:23.016 "min_cntlid": 1, 00:28:23.016 "max_cntlid": 65519, 00:28:23.016 "namespaces": [ 00:28:23.016 { 00:28:23.016 "nsid": 1, 00:28:23.016 "bdev_name": "Malloc0", 00:28:23.016 "name": "Malloc0", 00:28:23.016 "nguid": "85BB3D683F354F10B946168BBAC9E25A", 00:28:23.016 "uuid": "85bb3d68-3f35-4f10-b946-168bbac9e25a" 00:28:23.016 }, 00:28:23.016 { 00:28:23.016 "nsid": 2, 00:28:23.016 "bdev_name": "Malloc1", 00:28:23.016 "name": "Malloc1", 00:28:23.016 "nguid": "D56F6466A4344FD7A17587AD567B44C4", 00:28:23.016 "uuid": "d56f6466-a434-4fd7-a175-87ad567b44c4" 00:28:23.016 } 00:28:23.016 ] 00:28:23.016 } 00:28:23.016 ] 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2800715 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.016 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.278 06:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.278 rmmod 
nvme_tcp 00:28:23.278 rmmod nvme_fabrics 00:28:23.278 rmmod nvme_keyring 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2800656 ']' 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2800656 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 2800656 ']' 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 2800656 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2800656 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2800656' 00:28:23.278 killing process with pid 2800656 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 2800656 00:28:23.278 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 2800656 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.540 06:39:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.455 06:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.455 00:28:25.455 real 0m11.775s 00:28:25.455 user 0m8.623s 00:28:25.455 sys 0m6.313s 00:28:25.455 06:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:25.455 06:39:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:25.455 ************************************ 00:28:25.455 END TEST nvmf_aer 00:28:25.455 ************************************ 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.716 ************************************ 00:28:25.716 START TEST nvmf_async_init 00:28:25.716 ************************************ 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:25.716 * Looking for test storage... 00:28:25.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.716 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.979 --rc genhtml_branch_coverage=1 00:28:25.979 --rc genhtml_function_coverage=1 00:28:25.979 --rc genhtml_legend=1 00:28:25.979 --rc geninfo_all_blocks=1 00:28:25.979 --rc geninfo_unexecuted_blocks=1 00:28:25.979 00:28:25.979 ' 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.979 --rc genhtml_branch_coverage=1 00:28:25.979 --rc genhtml_function_coverage=1 00:28:25.979 --rc genhtml_legend=1 00:28:25.979 --rc geninfo_all_blocks=1 00:28:25.979 --rc geninfo_unexecuted_blocks=1 00:28:25.979 00:28:25.979 ' 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.979 --rc genhtml_branch_coverage=1 00:28:25.979 --rc genhtml_function_coverage=1 00:28:25.979 --rc genhtml_legend=1 00:28:25.979 --rc geninfo_all_blocks=1 00:28:25.979 --rc geninfo_unexecuted_blocks=1 00:28:25.979 00:28:25.979 ' 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.979 --rc genhtml_branch_coverage=1 00:28:25.979 --rc genhtml_function_coverage=1 00:28:25.979 --rc genhtml_legend=1 00:28:25.979 --rc geninfo_all_blocks=1 00:28:25.979 --rc geninfo_unexecuted_blocks=1 00:28:25.979 00:28:25.979 ' 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.979 06:39:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.979 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:25.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:25.980 06:39:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3e9e56cd601e42d98aa79593a504c799 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.980 06:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.122 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:34.123 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:34.123 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:34.123 Found net devices under 0000:31:00.0: cvl_0_0 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:34.123 Found net devices under 0000:31:00.1: cvl_0_1 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.123 06:39:52 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.123 06:39:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:34.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:28:34.123 00:28:34.123 --- 10.0.0.2 ping statistics --- 00:28:34.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.123 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:34.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:28:34.123 00:28:34.123 --- 10.0.0.1 ping statistics --- 00:28:34.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.123 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2805085 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2805085 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 2805085 ']' 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:34.123 06:39:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.123 [2024-11-20 06:39:53.339888] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:28:34.123 [2024-11-20 06:39:53.339954] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.123 [2024-11-20 06:39:53.439178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.123 [2024-11-20 06:39:53.490219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.123 [2024-11-20 06:39:53.490265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.124 [2024-11-20 06:39:53.490274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.124 [2024-11-20 06:39:53.490281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.124 [2024-11-20 06:39:53.490287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.124 [2024-11-20 06:39:53.491127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.385 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:34.385 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:28:34.385 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:34.385 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:34.385 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.386 [2024-11-20 06:39:54.198574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.386 null0 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3e9e56cd601e42d98aa79593a504c799 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.386 [2024-11-20 06:39:54.258950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.386 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.647 nvme0n1 00:28:34.647 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.647 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:34.647 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.647 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.647 [ 00:28:34.647 { 00:28:34.647 "name": "nvme0n1", 00:28:34.647 "aliases": [ 00:28:34.647 "3e9e56cd-601e-42d9-8aa7-9593a504c799" 00:28:34.647 ], 00:28:34.647 "product_name": "NVMe disk", 00:28:34.647 "block_size": 512, 00:28:34.647 "num_blocks": 2097152, 00:28:34.647 "uuid": "3e9e56cd-601e-42d9-8aa7-9593a504c799", 00:28:34.647 "numa_id": 0, 00:28:34.647 "assigned_rate_limits": { 00:28:34.647 "rw_ios_per_sec": 0, 00:28:34.647 "rw_mbytes_per_sec": 0, 00:28:34.647 "r_mbytes_per_sec": 0, 00:28:34.647 "w_mbytes_per_sec": 0 00:28:34.647 }, 00:28:34.647 "claimed": false, 00:28:34.647 "zoned": false, 00:28:34.647 "supported_io_types": { 00:28:34.647 "read": true, 00:28:34.647 "write": true, 00:28:34.647 "unmap": false, 00:28:34.647 "flush": true, 00:28:34.647 "reset": true, 00:28:34.647 "nvme_admin": true, 00:28:34.647 "nvme_io": true, 00:28:34.647 "nvme_io_md": false, 00:28:34.647 "write_zeroes": true, 00:28:34.647 "zcopy": false, 00:28:34.647 "get_zone_info": false, 00:28:34.647 "zone_management": false, 00:28:34.647 "zone_append": false, 00:28:34.647 "compare": true, 00:28:34.647 "compare_and_write": true, 00:28:34.647 "abort": true, 00:28:34.647 "seek_hole": false, 00:28:34.647 "seek_data": false, 00:28:34.647 "copy": true, 00:28:34.647 "nvme_iov_md": false 00:28:34.647 }, 00:28:34.647 
"memory_domains": [ 00:28:34.647 { 00:28:34.647 "dma_device_id": "system", 00:28:34.647 "dma_device_type": 1 00:28:34.647 } 00:28:34.647 ], 00:28:34.647 "driver_specific": { 00:28:34.647 "nvme": [ 00:28:34.647 { 00:28:34.647 "trid": { 00:28:34.647 "trtype": "TCP", 00:28:34.647 "adrfam": "IPv4", 00:28:34.647 "traddr": "10.0.0.2", 00:28:34.647 "trsvcid": "4420", 00:28:34.647 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:34.647 }, 00:28:34.647 "ctrlr_data": { 00:28:34.647 "cntlid": 1, 00:28:34.647 "vendor_id": "0x8086", 00:28:34.647 "model_number": "SPDK bdev Controller", 00:28:34.647 "serial_number": "00000000000000000000", 00:28:34.648 "firmware_revision": "25.01", 00:28:34.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:34.648 "oacs": { 00:28:34.648 "security": 0, 00:28:34.648 "format": 0, 00:28:34.648 "firmware": 0, 00:28:34.648 "ns_manage": 0 00:28:34.648 }, 00:28:34.648 "multi_ctrlr": true, 00:28:34.648 "ana_reporting": false 00:28:34.648 }, 00:28:34.648 "vs": { 00:28:34.648 "nvme_version": "1.3" 00:28:34.648 }, 00:28:34.648 "ns_data": { 00:28:34.648 "id": 1, 00:28:34.648 "can_share": true 00:28:34.648 } 00:28:34.648 } 00:28:34.648 ], 00:28:34.648 "mp_policy": "active_passive" 00:28:34.648 } 00:28:34.648 } 00:28:34.648 ] 00:28:34.648 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.648 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:34.648 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.648 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.648 [2024-11-20 06:39:54.533584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:34.648 [2024-11-20 06:39:54.533669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcac4a0 (9): Bad file descriptor 00:28:34.909 [2024-11-20 06:39:54.665851] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.909 [ 00:28:34.909 { 00:28:34.909 "name": "nvme0n1", 00:28:34.909 "aliases": [ 00:28:34.909 "3e9e56cd-601e-42d9-8aa7-9593a504c799" 00:28:34.909 ], 00:28:34.909 "product_name": "NVMe disk", 00:28:34.909 "block_size": 512, 00:28:34.909 "num_blocks": 2097152, 00:28:34.909 "uuid": "3e9e56cd-601e-42d9-8aa7-9593a504c799", 00:28:34.909 "numa_id": 0, 00:28:34.909 "assigned_rate_limits": { 00:28:34.909 "rw_ios_per_sec": 0, 00:28:34.909 "rw_mbytes_per_sec": 0, 00:28:34.909 "r_mbytes_per_sec": 0, 00:28:34.909 "w_mbytes_per_sec": 0 00:28:34.909 }, 00:28:34.909 "claimed": false, 00:28:34.909 "zoned": false, 00:28:34.909 "supported_io_types": { 00:28:34.909 "read": true, 00:28:34.909 "write": true, 00:28:34.909 "unmap": false, 00:28:34.909 "flush": true, 00:28:34.909 "reset": true, 00:28:34.909 "nvme_admin": true, 00:28:34.909 "nvme_io": true, 00:28:34.909 "nvme_io_md": false, 00:28:34.909 "write_zeroes": true, 00:28:34.909 "zcopy": false, 00:28:34.909 "get_zone_info": false, 00:28:34.909 "zone_management": false, 00:28:34.909 "zone_append": false, 00:28:34.909 "compare": true, 00:28:34.909 "compare_and_write": true, 00:28:34.909 "abort": true, 00:28:34.909 "seek_hole": false, 00:28:34.909 "seek_data": false, 00:28:34.909 "copy": true, 00:28:34.909 "nvme_iov_md": false 00:28:34.909 }, 00:28:34.909 "memory_domains": [ 00:28:34.909 { 00:28:34.909 "dma_device_id": "system", 00:28:34.909 "dma_device_type": 1 00:28:34.909 } 00:28:34.909 ], 00:28:34.909 "driver_specific": { 00:28:34.909 "nvme": [ 00:28:34.909 { 00:28:34.909 "trid": { 00:28:34.909 "trtype": "TCP", 00:28:34.909 "adrfam": "IPv4", 00:28:34.909 "traddr": "10.0.0.2", 00:28:34.909 "trsvcid": "4420", 00:28:34.909 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:34.909 }, 00:28:34.909 "ctrlr_data": { 00:28:34.909 "cntlid": 2, 00:28:34.909 "vendor_id": "0x8086", 00:28:34.909 "model_number": "SPDK bdev Controller", 00:28:34.909 "serial_number": "00000000000000000000", 00:28:34.909 "firmware_revision": "25.01", 00:28:34.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:34.909 "oacs": { 00:28:34.909 "security": 0, 00:28:34.909 "format": 0, 00:28:34.909 "firmware": 0, 00:28:34.909 "ns_manage": 0 00:28:34.909 }, 00:28:34.909 "multi_ctrlr": true, 00:28:34.909 "ana_reporting": false 00:28:34.909 }, 00:28:34.909 "vs": { 00:28:34.909 "nvme_version": "1.3" 00:28:34.909 }, 00:28:34.909 "ns_data": { 00:28:34.909 "id": 1, 00:28:34.909 "can_share": true 00:28:34.909 } 00:28:34.909 } 00:28:34.909 ], 00:28:34.909 "mp_policy": "active_passive" 00:28:34.909 } 00:28:34.909 } 00:28:34.909 ] 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
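Unlike the reset just exercised, bdev_nvme_detach_controller nvme0 removes the controller and its nvme0n1 bdev outright, which is why the TLS portion that follows must attach a fresh controller rather than reconnect. A minimal sketch of verifying the removal (the || branch is a hypothetical check, not in the script):

scripts/rpc.py bdev_nvme_detach_controller nvme0
# nvme0n1 no longer exists until the next attach, so this errors out.
scripts/rpc.py bdev_get_bdevs -b nvme0n1 || echo 'bdev removed'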
00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.nk1FZwmuQq 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.nk1FZwmuQq 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.nk1FZwmuQq 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.909 [2024-11-20 06:39:54.754270] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:34.909 [2024-11-20 06:39:54.754428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.909 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:34.909 [2024-11-20 06:39:54.778348] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:35.170 nvme0n1 00:28:35.170 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.170 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.171 [ 00:28:35.171 { 00:28:35.171 "name": "nvme0n1", 00:28:35.171 "aliases": [ 00:28:35.171 "3e9e56cd-601e-42d9-8aa7-9593a504c799" 00:28:35.171 ], 00:28:35.171 "product_name": "NVMe disk", 00:28:35.171 "block_size": 512, 00:28:35.171 "num_blocks": 2097152, 00:28:35.171 "uuid": "3e9e56cd-601e-42d9-8aa7-9593a504c799", 00:28:35.171 "numa_id": 0, 00:28:35.171 "assigned_rate_limits": { 00:28:35.171 "rw_ios_per_sec": 0, 00:28:35.171 "rw_mbytes_per_sec": 0, 00:28:35.171 "r_mbytes_per_sec": 0, 00:28:35.171 "w_mbytes_per_sec": 0 00:28:35.171 }, 00:28:35.171 "claimed": false, 00:28:35.171 "zoned": false, 00:28:35.171 "supported_io_types": { 00:28:35.171 "read": true, 00:28:35.171 "write": true, 00:28:35.171 "unmap": false, 00:28:35.171 "flush": true, 00:28:35.171 "reset": true, 00:28:35.171 "nvme_admin": true, 00:28:35.171 "nvme_io": true, 00:28:35.171 "nvme_io_md": false, 00:28:35.171 "write_zeroes": true, 00:28:35.171 "zcopy": false, 00:28:35.171 "get_zone_info": false, 00:28:35.171 "zone_management": false, 00:28:35.171 "zone_append": false, 00:28:35.171 "compare": true, 00:28:35.171 "compare_and_write": true, 00:28:35.171 "abort": true, 00:28:35.171 "seek_hole": false, 00:28:35.171 "seek_data": false, 00:28:35.171 "copy": true, 00:28:35.171 "nvme_iov_md": false 00:28:35.171 }, 00:28:35.171 "memory_domains": [ 00:28:35.171 { 00:28:35.171 "dma_device_id": "system", 00:28:35.171 "dma_device_type": 1 00:28:35.171 } 00:28:35.171 ], 00:28:35.171 "driver_specific": { 00:28:35.171 "nvme": [ 00:28:35.171 { 00:28:35.171 "trid": { 00:28:35.171 "trtype": "TCP", 00:28:35.171 "adrfam": "IPv4", 00:28:35.171 "traddr": "10.0.0.2", 00:28:35.171 "trsvcid": "4421", 00:28:35.171 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:35.171 }, 00:28:35.171 "ctrlr_data": { 00:28:35.171 "cntlid": 3, 00:28:35.171 "vendor_id": "0x8086", 00:28:35.171 "model_number": "SPDK bdev Controller", 00:28:35.171 "serial_number": "00000000000000000000", 00:28:35.171 "firmware_revision": "25.01", 00:28:35.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.171 "oacs": { 00:28:35.171 "security": 0, 00:28:35.171 "format": 0, 00:28:35.171 "firmware": 0, 00:28:35.171 "ns_manage": 0 00:28:35.171 }, 00:28:35.171 "multi_ctrlr": true, 00:28:35.171 "ana_reporting": false 00:28:35.171 }, 00:28:35.171 "vs": { 00:28:35.171 "nvme_version": "1.3" 00:28:35.171 }, 00:28:35.171 "ns_data": { 00:28:35.171 "id": 1, 00:28:35.171 "can_share": true 00:28:35.171 } 00:28:35.171 } 00:28:35.171 ], 00:28:35.171 "mp_policy": "active_passive" 00:28:35.171 } 00:28:35.171 } 00:28:35.171 ] 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.nk1FZwmuQq 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
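
The key handling and secure-channel steps traced above condense to a short RPC sequence; a sketch, assuming the harness's rpc_cmd wrapper around scripts/rpc.py (the PSK is the interchange-format example key from the test, not a secret):

# TLS PSK flow exercised by async_init.sh lines 53-66, as traced above:
KEY_PATH=$(mktemp)
echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$KEY_PATH"
chmod 0600 "$KEY_PATH"
rpc_cmd keyring_file_add_key key0 "$KEY_PATH"
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host1 --psk key0
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
    -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Teardown is the mirror image, as the trace shows: bdev_nvme_detach_controller nvme0, then rm -f on the key file before the EXIT trap is cleared.
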
00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.171 rmmod nvme_tcp 00:28:35.171 rmmod nvme_fabrics 00:28:35.171 rmmod nvme_keyring 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2805085 ']' 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2805085 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 2805085 ']' 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 2805085 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:35.171 06:39:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2805085 00:28:35.171 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:35.171 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:35.171 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2805085' 00:28:35.171 killing process with pid 2805085 00:28:35.171 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 2805085 00:28:35.171 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 2805085 00:28:35.432 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:35.432 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:35.432 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:35.432 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:28:35.432 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:28:35.432 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:35.433 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:28:35.433 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.433 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:35.433 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:28:35.433 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.433 06:39:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.980 00:28:37.980 real 0m11.836s 00:28:37.980 user 0m4.255s 00:28:37.980 sys 0m6.119s 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:37.980 ************************************ 00:28:37.980 END TEST nvmf_async_init 00:28:37.980 ************************************ 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.980 ************************************ 00:28:37.980 START TEST dma 00:28:37.980 ************************************ 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:37.980 * Looking for test storage... 00:28:37.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.980 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:37.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.981 --rc genhtml_branch_coverage=1 00:28:37.981 --rc genhtml_function_coverage=1 00:28:37.981 --rc genhtml_legend=1 00:28:37.981 --rc geninfo_all_blocks=1 00:28:37.981 --rc geninfo_unexecuted_blocks=1 00:28:37.981 00:28:37.981 ' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:37.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.981 --rc genhtml_branch_coverage=1 00:28:37.981 --rc genhtml_function_coverage=1 00:28:37.981 --rc genhtml_legend=1 00:28:37.981 --rc geninfo_all_blocks=1 00:28:37.981 --rc geninfo_unexecuted_blocks=1 00:28:37.981 00:28:37.981 ' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:37.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.981 --rc genhtml_branch_coverage=1 00:28:37.981 --rc genhtml_function_coverage=1 00:28:37.981 --rc genhtml_legend=1 00:28:37.981 --rc geninfo_all_blocks=1 00:28:37.981 --rc geninfo_unexecuted_blocks=1 00:28:37.981 00:28:37.981 ' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:37.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.981 --rc genhtml_branch_coverage=1 00:28:37.981 --rc genhtml_function_coverage=1 00:28:37.981 --rc genhtml_legend=1 00:28:37.981 --rc geninfo_all_blocks=1 00:28:37.981 --rc geninfo_unexecuted_blocks=1 00:28:37.981 00:28:37.981 ' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.981 
06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:37.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:28:37.981 00:28:37.981 real 0m0.238s 00:28:37.981 user 0m0.136s 00:28:37.981 sys 0m0.118s 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:28:37.981 ************************************ 00:28:37.981 END TEST dma 00:28:37.981 ************************************ 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.981 ************************************ 00:28:37.981 START TEST nvmf_identify 00:28:37.981 
************************************ 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:37.981 * Looking for test storage... 00:28:37.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.981 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:37.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.982 --rc genhtml_branch_coverage=1 00:28:37.982 --rc genhtml_function_coverage=1 00:28:37.982 --rc genhtml_legend=1 00:28:37.982 --rc geninfo_all_blocks=1 00:28:37.982 --rc geninfo_unexecuted_blocks=1 00:28:37.982 00:28:37.982 ' 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:37.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.982 --rc genhtml_branch_coverage=1 00:28:37.982 --rc genhtml_function_coverage=1 00:28:37.982 --rc genhtml_legend=1 00:28:37.982 --rc geninfo_all_blocks=1 00:28:37.982 --rc geninfo_unexecuted_blocks=1 00:28:37.982 00:28:37.982 ' 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:37.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.982 --rc genhtml_branch_coverage=1 00:28:37.982 --rc genhtml_function_coverage=1 00:28:37.982 --rc genhtml_legend=1 00:28:37.982 --rc geninfo_all_blocks=1 00:28:37.982 --rc geninfo_unexecuted_blocks=1 00:28:37.982 00:28:37.982 ' 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:37.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.982 --rc genhtml_branch_coverage=1 00:28:37.982 --rc genhtml_function_coverage=1 00:28:37.982 --rc genhtml_legend=1 00:28:37.982 --rc geninfo_all_blocks=1 00:28:37.982 --rc geninfo_unexecuted_blocks=1 00:28:37.982 00:28:37.982 ' 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.982 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:38.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.244 06:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:46.388 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:46.388 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:46.388 Found net devices under 0000:31:00.0: cvl_0_0 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:46.388 Found net devices under 0000:31:00.1: cvl_0_1 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:46.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:28:46.388 00:28:46.388 --- 10.0.0.2 ping statistics --- 00:28:46.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.388 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:46.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:28:46.388 00:28:46.388 --- 10.0.0.1 ping statistics --- 00:28:46.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.388 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:46.388 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2809866 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2809866 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 2809866 ']' 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:46.389 06:40:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.389 [2024-11-20 06:40:05.704056] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:28:46.389 [2024-11-20 06:40:05.704124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.389 [2024-11-20 06:40:05.805910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.389 [2024-11-20 06:40:05.860627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.389 [2024-11-20 06:40:05.860680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.389 [2024-11-20 06:40:05.860689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.389 [2024-11-20 06:40:05.860697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.389 [2024-11-20 06:40:05.860704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.389 [2024-11-20 06:40:05.863159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.389 [2024-11-20 06:40:05.863320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.389 [2024-11-20 06:40:05.863479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.389 [2024-11-20 06:40:05.863480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.650 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:46.650 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:28:46.650 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:46.650 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.650 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.650 [2024-11-20 06:40:06.528879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.651 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.651 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:46.651 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:46.651 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.913 Malloc0 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.913 [2024-11-20 06:40:06.653601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:46.913 [ 00:28:46.913 { 00:28:46.913 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:46.913 "subtype": "Discovery", 00:28:46.913 "listen_addresses": [ 00:28:46.913 { 00:28:46.913 "trtype": "TCP", 00:28:46.913 "adrfam": "IPv4", 00:28:46.913 "traddr": "10.0.0.2", 00:28:46.913 "trsvcid": "4420" 00:28:46.913 } 00:28:46.913 ], 00:28:46.913 "allow_any_host": true, 00:28:46.913 "hosts": [] 00:28:46.913 }, 00:28:46.913 { 00:28:46.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.913 "subtype": "NVMe", 00:28:46.913 "listen_addresses": [ 00:28:46.913 { 00:28:46.913 "trtype": "TCP", 00:28:46.913 "adrfam": "IPv4", 00:28:46.913 "traddr": "10.0.0.2", 00:28:46.913 "trsvcid": "4420" 00:28:46.913 } 00:28:46.913 ], 00:28:46.913 "allow_any_host": true, 00:28:46.913 "hosts": [], 00:28:46.913 "serial_number": "SPDK00000000000001", 00:28:46.913 "model_number": "SPDK bdev Controller", 00:28:46.913 "max_namespaces": 32, 00:28:46.913 "min_cntlid": 1, 00:28:46.913 "max_cntlid": 65519, 00:28:46.913 "namespaces": [ 00:28:46.913 { 00:28:46.913 "nsid": 1, 00:28:46.913 "bdev_name": "Malloc0", 00:28:46.913 "name": "Malloc0", 00:28:46.913 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:46.913 "eui64": "ABCDEF0123456789", 00:28:46.913 "uuid": "93c053d3-6c80-4f60-a01f-df9d0e41c21d" 00:28:46.913 } 00:28:46.913 ] 00:28:46.913 } 00:28:46.913 ] 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.913 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:46.913 [2024-11-20 06:40:06.717908] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:28:46.913 [2024-11-20 06:40:06.717952] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810096 ] 00:28:46.914 [2024-11-20 06:40:06.774614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:28:46.914 [2024-11-20 06:40:06.774684] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:46.914 [2024-11-20 06:40:06.774691] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:46.914 [2024-11-20 06:40:06.774709] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:46.914 [2024-11-20 06:40:06.774724] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:46.914 [2024-11-20 06:40:06.779166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:28:46.914 [2024-11-20 06:40:06.779219] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x602550 0 00:28:46.914 [2024-11-20 06:40:06.779473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:46.914 [2024-11-20 06:40:06.779483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:46.914 [2024-11-20 06:40:06.779488] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:46.914 [2024-11-20 06:40:06.779492] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:46.914 [2024-11-20 06:40:06.779531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.779538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.779543] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x602550) 00:28:46.914 [2024-11-20 06:40:06.779561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:46.914 [2024-11-20 06:40:06.779576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664100, cid 0, qid 0 00:28:46.914 [2024-11-20 06:40:06.786761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:46.914 [2024-11-20 06:40:06.786772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:46.914 [2024-11-20 06:40:06.786776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.786781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664100) on tqpair=0x602550 00:28:46.914 [2024-11-20 06:40:06.786796] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:46.914 [2024-11-20 06:40:06.786805] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:28:46.914 [2024-11-20 06:40:06.786811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:28:46.914 [2024-11-20 06:40:06.786828] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.786832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.786836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x602550) 00:28:46.914 [2024-11-20 06:40:06.786850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.914 [2024-11-20 06:40:06.786864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664100, cid 0, qid 0 00:28:46.914 [2024-11-20 06:40:06.787086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:46.914 [2024-11-20 06:40:06.787094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:46.914 [2024-11-20 06:40:06.787098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664100) on tqpair=0x602550 00:28:46.914 [2024-11-20 06:40:06.787108] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:28:46.914 [2024-11-20 06:40:06.787115] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:28:46.914 [2024-11-20 06:40:06.787123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x602550) 00:28:46.914 [2024-11-20 06:40:06.787137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.914 [2024-11-20 06:40:06.787147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664100, cid 0, qid 0 00:28:46.914 [2024-11-20 06:40:06.787373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:46.914 [2024-11-20 06:40:06.787380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:46.914 [2024-11-20 06:40:06.787384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664100) on tqpair=0x602550 00:28:46.914 [2024-11-20 06:40:06.787394] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:28:46.914 [2024-11-20 06:40:06.787402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:46.914 [2024-11-20 06:40:06.787409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x602550) 00:28:46.914 [2024-11-20 06:40:06.787424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.914 [2024-11-20 06:40:06.787434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664100, cid 0, qid 0 
00:28:46.914 [2024-11-20 06:40:06.787603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:46.914 [2024-11-20 06:40:06.787610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:46.914 [2024-11-20 06:40:06.787613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664100) on tqpair=0x602550 00:28:46.914 [2024-11-20 06:40:06.787623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:46.914 [2024-11-20 06:40:06.787633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x602550) 00:28:46.914 [2024-11-20 06:40:06.787647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.914 [2024-11-20 06:40:06.787657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664100, cid 0, qid 0 00:28:46.914 [2024-11-20 06:40:06.787826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:46.914 [2024-11-20 06:40:06.787833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:46.914 [2024-11-20 06:40:06.787836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664100) on tqpair=0x602550 00:28:46.914 [2024-11-20 06:40:06.787846] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:46.914 [2024-11-20 06:40:06.787851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:28:46.914 [2024-11-20 06:40:06.787859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:46.914 [2024-11-20 06:40:06.787968] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:28:46.914 [2024-11-20 06:40:06.787973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:46.914 [2024-11-20 06:40:06.787985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.787992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x602550) 00:28:46.914 [2024-11-20 06:40:06.787999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.914 [2024-11-20 06:40:06.788010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664100, cid 0, qid 0 00:28:46.914 [2024-11-20 06:40:06.788207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:46.914 [2024-11-20 06:40:06.788213] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:46.914 [2024-11-20 06:40:06.788217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:46.914 [2024-11-20 06:40:06.788220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664100) on tqpair=0x602550 00:28:46.914 [2024-11-20 06:40:06.788225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:46.915 [2024-11-20 06:40:06.788235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.788239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.788243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x602550) 00:28:46.915 [2024-11-20 06:40:06.788250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.915 [2024-11-20 06:40:06.788260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664100, cid 0, qid 0 00:28:46.915 [2024-11-20 06:40:06.788446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:46.915 [2024-11-20 06:40:06.788452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:46.915 [2024-11-20 06:40:06.788456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.788460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664100) on tqpair=0x602550 00:28:46.915 [2024-11-20 06:40:06.788464] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:46.915 [2024-11-20 06:40:06.788469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:46.915 [2024-11-20 06:40:06.788477] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:28:46.915 [2024-11-20 06:40:06.788490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:46.915 [2024-11-20 06:40:06.788502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.788506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x602550) 00:28:46.915 [2024-11-20 06:40:06.788513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.915 [2024-11-20 06:40:06.788524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664100, cid 0, qid 0 00:28:46.915 [2024-11-20 06:40:06.788755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:46.915 [2024-11-20 06:40:06.788762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:46.915 [2024-11-20 06:40:06.788766] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.788771] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x602550): datao=0, datal=4096, cccid=0 00:28:46.915 [2024-11-20 06:40:06.788776] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x664100) on tqpair(0x602550): expected_datao=0, payload_size=4096 00:28:46.915 [2024-11-20 06:40:06.788781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.788790] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.788795] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.788923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:46.915 [2024-11-20 06:40:06.788929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:46.915 [2024-11-20 06:40:06.788933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.788937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664100) on tqpair=0x602550 00:28:46.915 [2024-11-20 06:40:06.788946] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:28:46.915 [2024-11-20 06:40:06.788951] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:28:46.915 [2024-11-20 06:40:06.788956] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:28:46.915 [2024-11-20 06:40:06.788965] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:28:46.915 [2024-11-20 06:40:06.788970] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:28:46.915 [2024-11-20 06:40:06.788975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:28:46.915 [2024-11-20 06:40:06.788986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:46.915 [2024-11-20 06:40:06.788994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.788998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x602550) 00:28:46.915 [2024-11-20 06:40:06.789009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:46.915 [2024-11-20 06:40:06.789020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664100, cid 0, qid 0 00:28:46.915 [2024-11-20 06:40:06.789203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:46.915 [2024-11-20 06:40:06.789210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:46.915 [2024-11-20 06:40:06.789213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664100) on tqpair=0x602550 00:28:46.915 [2024-11-20 06:40:06.789226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x602550) 00:28:46.915 [2024-11-20 
06:40:06.789243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.915 [2024-11-20 06:40:06.789249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x602550) 00:28:46.915 [2024-11-20 06:40:06.789263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.915 [2024-11-20 06:40:06.789269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x602550) 00:28:46.915 [2024-11-20 06:40:06.789282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.915 [2024-11-20 06:40:06.789288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x602550) 00:28:46.915 [2024-11-20 06:40:06.789302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.915 [2024-11-20 06:40:06.789306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:46.915 [2024-11-20 06:40:06.789315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:46.915 [2024-11-20 06:40:06.789322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x602550) 00:28:46.915 [2024-11-20 06:40:06.789333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.915 [2024-11-20 06:40:06.789344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664100, cid 0, qid 0 00:28:46.915 [2024-11-20 06:40:06.789350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664280, cid 1, qid 0 00:28:46.915 [2024-11-20 06:40:06.789354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664400, cid 2, qid 0 00:28:46.915 [2024-11-20 06:40:06.789359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664580, cid 3, qid 0 00:28:46.915 [2024-11-20 06:40:06.789364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664700, cid 4, qid 0 00:28:46.915 [2024-11-20 06:40:06.789622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:46.915 [2024-11-20 06:40:06.789630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:46.915 [2024-11-20 06:40:06.789634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:46.915 
[2024-11-20 06:40:06.789638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664700) on tqpair=0x602550 00:28:46.915 [2024-11-20 06:40:06.789646] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:28:46.915 [2024-11-20 06:40:06.789652] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:28:46.915 [2024-11-20 06:40:06.789663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x602550) 00:28:46.915 [2024-11-20 06:40:06.789676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.915 [2024-11-20 06:40:06.789686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664700, cid 4, qid 0 00:28:46.915 [2024-11-20 06:40:06.789876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:46.915 [2024-11-20 06:40:06.789883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:46.915 [2024-11-20 06:40:06.789887] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:46.915 [2024-11-20 06:40:06.789890] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x602550): datao=0, datal=4096, cccid=4 00:28:46.916 [2024-11-20 06:40:06.789895] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x664700) on tqpair(0x602550): expected_datao=0, payload_size=4096 00:28:46.916 [2024-11-20 06:40:06.789900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:46.916 [2024-11-20 06:40:06.789913] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:46.916 [2024-11-20 06:40:06.789917] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.834757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.182 [2024-11-20 06:40:06.834770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.182 [2024-11-20 06:40:06.834774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.834778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664700) on tqpair=0x602550 00:28:47.182 [2024-11-20 06:40:06.834797] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:28:47.182 [2024-11-20 06:40:06.834829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.834834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x602550) 00:28:47.182 [2024-11-20 06:40:06.834843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.182 [2024-11-20 06:40:06.834852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.834856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.834859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x602550) 00:28:47.182 [2024-11-20 06:40:06.834866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.182 [2024-11-20 06:40:06.834884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664700, cid 4, qid 0 00:28:47.182 [2024-11-20 06:40:06.834889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664880, cid 5, qid 0 00:28:47.182 [2024-11-20 06:40:06.835160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.182 [2024-11-20 06:40:06.835167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.182 [2024-11-20 06:40:06.835171] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.835175] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x602550): datao=0, datal=1024, cccid=4 00:28:47.182 [2024-11-20 06:40:06.835179] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x664700) on tqpair(0x602550): expected_datao=0, payload_size=1024 00:28:47.182 [2024-11-20 06:40:06.835184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.835191] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.835195] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.835200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.182 [2024-11-20 06:40:06.835206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.182 [2024-11-20 06:40:06.835210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.835214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664880) on tqpair=0x602550 00:28:47.182 [2024-11-20 06:40:06.876984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.182 [2024-11-20 06:40:06.876996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.182 [2024-11-20 06:40:06.877000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.877004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664700) on tqpair=0x602550 00:28:47.182 [2024-11-20 06:40:06.877018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.877022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x602550) 00:28:47.182 [2024-11-20 06:40:06.877030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.182 [2024-11-20 06:40:06.877047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664700, cid 4, qid 0 00:28:47.182 [2024-11-20 06:40:06.877318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.182 [2024-11-20 06:40:06.877325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.182 [2024-11-20 06:40:06.877329] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.182 [2024-11-20 06:40:06.877332] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x602550): datao=0, datal=3072, cccid=4 00:28:47.182 [2024-11-20 06:40:06.877337] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x664700) on tqpair(0x602550): expected_datao=0, payload_size=3072 00:28:47.182 [2024-11-20 06:40:06.877341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:28:47.182 [2024-11-20 06:40:06.877348] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:47.182 [2024-11-20 06:40:06.877352] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:47.182 [2024-11-20 06:40:06.877491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:47.182 [2024-11-20 06:40:06.877498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:47.182 [2024-11-20 06:40:06.877501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:47.182 [2024-11-20 06:40:06.877505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664700) on tqpair=0x602550
00:28:47.182 [2024-11-20 06:40:06.877514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:47.182 [2024-11-20 06:40:06.877518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x602550)
00:28:47.182 [2024-11-20 06:40:06.877524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.182 [2024-11-20 06:40:06.877538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664700, cid 4, qid 0
00:28:47.182 [2024-11-20 06:40:06.877781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:28:47.182 [2024-11-20 06:40:06.877787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:28:47.182 [2024-11-20 06:40:06.877791] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:28:47.182 [2024-11-20 06:40:06.877795] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x602550): datao=0, datal=8, cccid=4
00:28:47.182 [2024-11-20 06:40:06.877799] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x664700) on tqpair(0x602550): expected_datao=0, payload_size=8
00:28:47.182 [2024-11-20 06:40:06.877803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:47.182 [2024-11-20 06:40:06.877810] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:47.182 [2024-11-20 06:40:06.877813] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:47.182 [2024-11-20 06:40:06.918915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:47.182 [2024-11-20 06:40:06.918926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:47.182 [2024-11-20 06:40:06.918929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:47.182 [2024-11-20 06:40:06.918935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664700) on tqpair=0x602550
00:28:47.182 =====================================================
00:28:47.182 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:28:47.182 =====================================================
00:28:47.182 Controller Capabilities/Features
00:28:47.182 ================================
00:28:47.182 Vendor ID: 0000
00:28:47.182 Subsystem Vendor ID: 0000
00:28:47.182 Serial Number: ....................
00:28:47.182 Model Number: ........................................
00:28:47.182 Firmware Version: 25.01
00:28:47.182 Recommended Arb Burst: 0
00:28:47.182 IEEE OUI Identifier: 00 00 00
00:28:47.182 Multi-path I/O
00:28:47.182 May have multiple subsystem ports: No
00:28:47.183 May have multiple controllers: No
00:28:47.183 Associated with SR-IOV VF: No
00:28:47.183 Max Data Transfer Size: 131072
00:28:47.183 Max Number of Namespaces: 0
00:28:47.183 Max Number of I/O Queues: 1024
00:28:47.183 NVMe Specification Version (VS): 1.3
00:28:47.183 NVMe Specification Version (Identify): 1.3
00:28:47.183 Maximum Queue Entries: 128
00:28:47.183 Contiguous Queues Required: Yes
00:28:47.183 Arbitration Mechanisms Supported
00:28:47.183 Weighted Round Robin: Not Supported
00:28:47.183 Vendor Specific: Not Supported
00:28:47.183 Reset Timeout: 15000 ms
00:28:47.183 Doorbell Stride: 4 bytes
00:28:47.183 NVM Subsystem Reset: Not Supported
00:28:47.183 Command Sets Supported
00:28:47.183 NVM Command Set: Supported
00:28:47.183 Boot Partition: Not Supported
00:28:47.183 Memory Page Size Minimum: 4096 bytes
00:28:47.183 Memory Page Size Maximum: 4096 bytes
00:28:47.183 Persistent Memory Region: Not Supported
00:28:47.183 Optional Asynchronous Events Supported
00:28:47.183 Namespace Attribute Notices: Not Supported
00:28:47.183 Firmware Activation Notices: Not Supported
00:28:47.183 ANA Change Notices: Not Supported
00:28:47.183 PLE Aggregate Log Change Notices: Not Supported
00:28:47.183 LBA Status Info Alert Notices: Not Supported
00:28:47.183 EGE Aggregate Log Change Notices: Not Supported
00:28:47.183 Normal NVM Subsystem Shutdown event: Not Supported
00:28:47.183 Zone Descriptor Change Notices: Not Supported
00:28:47.183 Discovery Log Change Notices: Supported
00:28:47.183 Controller Attributes
00:28:47.183 128-bit Host Identifier: Not Supported
00:28:47.183 Non-Operational Permissive Mode: Not Supported
00:28:47.183 NVM Sets: Not Supported
00:28:47.183 Read Recovery Levels: Not Supported
00:28:47.183 Endurance Groups: Not Supported
00:28:47.183 Predictable Latency Mode: Not Supported
00:28:47.183 Traffic Based Keep ALive: Not Supported
00:28:47.183 Namespace Granularity: Not Supported
00:28:47.183 SQ Associations: Not Supported
00:28:47.183 UUID List: Not Supported
00:28:47.183 Multi-Domain Subsystem: Not Supported
00:28:47.183 Fixed Capacity Management: Not Supported
00:28:47.183 Variable Capacity Management: Not Supported
00:28:47.183 Delete Endurance Group: Not Supported
00:28:47.183 Delete NVM Set: Not Supported
00:28:47.183 Extended LBA Formats Supported: Not Supported
00:28:47.183 Flexible Data Placement Supported: Not Supported
00:28:47.183 
00:28:47.183 Controller Memory Buffer Support
00:28:47.183 ================================
00:28:47.183 Supported: No
00:28:47.183 
00:28:47.183 Persistent Memory Region Support
00:28:47.183 ================================
00:28:47.183 Supported: No
00:28:47.183 
00:28:47.183 Admin Command Set Attributes
00:28:47.183 ============================
00:28:47.183 Security Send/Receive: Not Supported
00:28:47.183 Format NVM: Not Supported
00:28:47.183 Firmware Activate/Download: Not Supported
00:28:47.183 Namespace Management: Not Supported
00:28:47.183 Device Self-Test: Not Supported
00:28:47.183 Directives: Not Supported
00:28:47.183 NVMe-MI: Not Supported
00:28:47.183 Virtualization Management: Not Supported
00:28:47.183 Doorbell Buffer Config: Not Supported
00:28:47.183 Get LBA Status Capability: Not Supported
00:28:47.183 Command & Feature Lockdown Capability: Not Supported
00:28:47.183 Abort Command Limit: 1
00:28:47.183 Async Event Request Limit: 4
00:28:47.183 Number of Firmware Slots: N/A
00:28:47.183 Firmware Slot 1 Read-Only: N/A
00:28:47.183 Firmware Activation Without Reset: N/A
00:28:47.183 Multiple Update Detection Support: N/A
00:28:47.183 Firmware Update Granularity: No Information Provided
00:28:47.183 Per-Namespace SMART Log: No
00:28:47.183 Asymmetric Namespace Access Log Page: Not Supported
00:28:47.183 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:47.183 Command Effects Log Page: Not Supported
00:28:47.183 Get Log Page Extended Data: Supported
00:28:47.183 Telemetry Log Pages: Not Supported
00:28:47.183 Persistent Event Log Pages: Not Supported
00:28:47.183 Supported Log Pages Log Page: May Support
00:28:47.183 Commands Supported & Effects Log Page: Not Supported
00:28:47.183 Feature Identifiers & Effects Log Page:May Support
00:28:47.183 NVMe-MI Commands & Effects Log Page: May Support
00:28:47.183 Data Area 4 for Telemetry Log: Not Supported
00:28:47.183 Error Log Page Entries Supported: 128
00:28:47.183 Keep Alive: Not Supported
00:28:47.183 
00:28:47.183 NVM Command Set Attributes
00:28:47.183 ==========================
00:28:47.183 Submission Queue Entry Size
00:28:47.183 Max: 1
00:28:47.183 Min: 1
00:28:47.183 Completion Queue Entry Size
00:28:47.183 Max: 1
00:28:47.183 Min: 1
00:28:47.183 Number of Namespaces: 0
00:28:47.183 Compare Command: Not Supported
00:28:47.183 Write Uncorrectable Command: Not Supported
00:28:47.183 Dataset Management Command: Not Supported
00:28:47.183 Write Zeroes Command: Not Supported
00:28:47.183 Set Features Save Field: Not Supported
00:28:47.183 Reservations: Not Supported
00:28:47.183 Timestamp: Not Supported
00:28:47.183 Copy: Not Supported
00:28:47.183 Volatile Write Cache: Not Present
00:28:47.183 Atomic Write Unit (Normal): 1
00:28:47.183 Atomic Write Unit (PFail): 1
00:28:47.183 Atomic Compare & Write Unit: 1
00:28:47.183 Fused Compare & Write: Supported
00:28:47.183 Scatter-Gather List
00:28:47.183 SGL Command Set: Supported
00:28:47.183 SGL Keyed: Supported
00:28:47.183 SGL Bit Bucket Descriptor: Not Supported
00:28:47.183 SGL Metadata Pointer: Not Supported
00:28:47.183 Oversized SGL: Not Supported
00:28:47.183 SGL Metadata Address: Not Supported
00:28:47.183 SGL Offset: Supported
00:28:47.183 Transport SGL Data Block: Not Supported
00:28:47.183 Replay Protected Memory Block: Not Supported
00:28:47.183 
00:28:47.183 Firmware Slot Information
00:28:47.183 =========================
00:28:47.183 Active slot: 0
00:28:47.183 
00:28:47.183 
00:28:47.183 Error Log
00:28:47.183 =========
00:28:47.183 
00:28:47.183 Active Namespaces
00:28:47.183 =================
00:28:47.183 Discovery Log Page
00:28:47.183 ==================
00:28:47.183 Generation Counter: 2
00:28:47.183 Number of Records: 2
00:28:47.183 Record Format: 0
00:28:47.183 
00:28:47.183 Discovery Log Entry 0
00:28:47.183 ----------------------
00:28:47.183 Transport Type: 3 (TCP)
00:28:47.183 Address Family: 1 (IPv4)
00:28:47.183 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:47.183 Entry Flags:
00:28:47.183 Duplicate Returned Information: 1
00:28:47.183 Explicit Persistent Connection Support for Discovery: 1
00:28:47.183 Transport Requirements:
00:28:47.183 Secure Channel: Not Required
00:28:47.183 Port ID: 0 (0x0000)
00:28:47.183 Controller ID: 65535 (0xffff)
00:28:47.183 Admin Max SQ Size: 128
00:28:47.183 Transport Service Identifier: 4420
00:28:47.183 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:47.183 Transport Address: 10.0.0.2
00:28:47.183 Discovery Log Entry 1
00:28:47.183 ----------------------
00:28:47.183 Transport Type: 3 (TCP)
00:28:47.183 Address Family: 1 (IPv4)
00:28:47.183 Subsystem Type: 2 (NVM Subsystem)
00:28:47.183 Entry Flags:
00:28:47.183 Duplicate Returned Information: 0
00:28:47.183 Explicit Persistent Connection Support for Discovery: 0
00:28:47.183 Transport Requirements:
00:28:47.183 Secure Channel: Not Required
00:28:47.183 Port ID: 0 (0x0000)
00:28:47.183 Controller ID: 65535 (0xffff)
00:28:47.183 Admin Max SQ Size: 128
00:28:47.183 Transport Service Identifier: 4420
00:28:47.183 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:47.183 Transport Address: 10.0.0.2 [2024-11-20 06:40:06.919044] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:28:47.183 [2024-11-20 06:40:06.919058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664100) on tqpair=0x602550
00:28:47.183 [2024-11-20 06:40:06.919067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.183 [2024-11-20 06:40:06.919075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664280) on tqpair=0x602550
00:28:47.183 [2024-11-20 06:40:06.919081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.183 [2024-11-20 06:40:06.919087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664400) on tqpair=0x602550
00:28:47.183 [2024-11-20 06:40:06.919091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.183 [2024-11-20 06:40:06.919097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664580) on tqpair=0x602550
00:28:47.184 [2024-11-20 06:40:06.919101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.184 [2024-11-20 06:40:06.919114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:47.184 [2024-11-20 06:40:06.919119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:47.184 [2024-11-20 06:40:06.919122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x602550)
00:28:47.184 [2024-11-20 06:40:06.919131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.184 [2024-11-20 06:40:06.919146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664580, cid 3, qid 0
00:28:47.184 [2024-11-20 06:40:06.919236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:47.184 [2024-11-20 06:40:06.919246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:47.184 [2024-11-20 06:40:06.919251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:47.184 [2024-11-20 06:40:06.919255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664580) on tqpair=0x602550
00:28:47.184 [2024-11-20 06:40:06.919263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:47.184 [2024-11-20 06:40:06.919267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:47.184 [2024-11-20 06:40:06.919271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x602550)
00:28:47.184 [2024-11-20 06:40:06.919278] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.184 [2024-11-20 06:40:06.919291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664580, cid 3, qid 0 00:28:47.184 [2024-11-20 06:40:06.919499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.184 [2024-11-20 06:40:06.919507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.184 [2024-11-20 06:40:06.919511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.919515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664580) on tqpair=0x602550 00:28:47.184 [2024-11-20 06:40:06.919521] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:28:47.184 [2024-11-20 06:40:06.919526] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:28:47.184 [2024-11-20 06:40:06.919537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.919541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.919545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x602550) 00:28:47.184 [2024-11-20 06:40:06.919552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.184 [2024-11-20 06:40:06.919563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664580, cid 3, qid 0 00:28:47.184 [2024-11-20 06:40:06.919766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.184 [2024-11-20 06:40:06.919773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.184 [2024-11-20 06:40:06.919777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.919781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664580) on tqpair=0x602550 00:28:47.184 [2024-11-20 06:40:06.919792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.919797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.919800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x602550) 00:28:47.184 [2024-11-20 06:40:06.919808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.184 [2024-11-20 06:40:06.919819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664580, cid 3, qid 0 00:28:47.184 [2024-11-20 06:40:06.920018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.184 [2024-11-20 06:40:06.920024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.184 [2024-11-20 06:40:06.920028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.920033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664580) on tqpair=0x602550 00:28:47.184 [2024-11-20 06:40:06.920042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.920047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.920051] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x602550) 00:28:47.184 [2024-11-20 06:40:06.920057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.184 [2024-11-20 06:40:06.920068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664580, cid 3, qid 0 00:28:47.184 [2024-11-20 06:40:06.920262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.184 [2024-11-20 06:40:06.920269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.184 [2024-11-20 06:40:06.920273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.920277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664580) on tqpair=0x602550 00:28:47.184 [2024-11-20 06:40:06.920287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.920292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.920296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x602550) 00:28:47.184 [2024-11-20 06:40:06.920303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.184 [2024-11-20 06:40:06.920313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664580, cid 3, qid 0 00:28:47.184 [2024-11-20 06:40:06.920556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.184 [2024-11-20 06:40:06.920563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.184 [2024-11-20 06:40:06.920566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.920571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664580) on tqpair=0x602550 00:28:47.184 [2024-11-20 06:40:06.920581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.920585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.920589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x602550) 00:28:47.184 [2024-11-20 06:40:06.920596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.184 [2024-11-20 06:40:06.920607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664580, cid 3, qid 0 00:28:47.184 [2024-11-20 06:40:06.924759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.184 [2024-11-20 06:40:06.924772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.184 [2024-11-20 06:40:06.924775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.924779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664580) on tqpair=0x602550 00:28:47.184 [2024-11-20 06:40:06.924789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.924793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.924797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x602550) 00:28:47.184 [2024-11-20 06:40:06.924805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.184 [2024-11-20 06:40:06.924816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x664580, cid 3, qid 0 00:28:47.184 [2024-11-20 06:40:06.925014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.184 [2024-11-20 06:40:06.925020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.184 [2024-11-20 06:40:06.925024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:06.925029] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x664580) on tqpair=0x602550 00:28:47.184 [2024-11-20 06:40:06.925036] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:28:47.184 00:28:47.184 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:47.184 [2024-11-20 06:40:06.976700] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:28:47.184 [2024-11-20 06:40:06.976769] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810216 ] 00:28:47.184 [2024-11-20 06:40:07.039206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:28:47.184 [2024-11-20 06:40:07.039264] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:47.184 [2024-11-20 06:40:07.039269] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:47.184 [2024-11-20 06:40:07.039284] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:47.184 [2024-11-20 06:40:07.039294] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:47.184 [2024-11-20 06:40:07.040034] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:28:47.184 [2024-11-20 06:40:07.040070] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xafa550 0 00:28:47.184 [2024-11-20 06:40:07.053758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:47.184 [2024-11-20 06:40:07.053786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:47.184 [2024-11-20 06:40:07.053791] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:47.184 [2024-11-20 06:40:07.053794] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:47.184 [2024-11-20 06:40:07.053829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:07.053835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:07.053839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafa550) 00:28:47.184 [2024-11-20 06:40:07.053853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:47.184 [2024-11-20 06:40:07.053882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c100, 
cid 0, qid 0 00:28:47.184 [2024-11-20 06:40:07.057756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.184 [2024-11-20 06:40:07.057766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.184 [2024-11-20 06:40:07.057770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.184 [2024-11-20 06:40:07.057775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c100) on tqpair=0xafa550 00:28:47.184 [2024-11-20 06:40:07.057788] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:47.184 [2024-11-20 06:40:07.057796] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:28:47.185 [2024-11-20 06:40:07.057801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:28:47.185 [2024-11-20 06:40:07.057815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.057819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.057823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafa550) 00:28:47.185 [2024-11-20 06:40:07.057832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.185 [2024-11-20 06:40:07.057848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c100, cid 0, qid 0 00:28:47.185 [2024-11-20 06:40:07.058047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.185 [2024-11-20 06:40:07.058053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.185 [2024-11-20 06:40:07.058057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c100) on tqpair=0xafa550 00:28:47.185 [2024-11-20 06:40:07.058066] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:28:47.185 [2024-11-20 06:40:07.058074] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:28:47.185 [2024-11-20 06:40:07.058080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafa550) 00:28:47.185 [2024-11-20 06:40:07.058095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.185 [2024-11-20 06:40:07.058105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c100, cid 0, qid 0 00:28:47.185 [2024-11-20 06:40:07.058295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.185 [2024-11-20 06:40:07.058301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.185 [2024-11-20 06:40:07.058305] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058309] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c100) on tqpair=0xafa550 00:28:47.185 [2024-11-20 06:40:07.058314] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:28:47.185 [2024-11-20 06:40:07.058323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:47.185 [2024-11-20 06:40:07.058330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafa550) 00:28:47.185 [2024-11-20 06:40:07.058344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.185 [2024-11-20 06:40:07.058354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c100, cid 0, qid 0 00:28:47.185 [2024-11-20 06:40:07.058548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.185 [2024-11-20 06:40:07.058555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.185 [2024-11-20 06:40:07.058558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c100) on tqpair=0xafa550 00:28:47.185 [2024-11-20 06:40:07.058567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:47.185 [2024-11-20 06:40:07.058576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafa550) 00:28:47.185 [2024-11-20 06:40:07.058591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.185 [2024-11-20 06:40:07.058601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c100, cid 0, qid 0 00:28:47.185 [2024-11-20 06:40:07.058800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.185 [2024-11-20 06:40:07.058807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.185 [2024-11-20 06:40:07.058810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c100) on tqpair=0xafa550 00:28:47.185 [2024-11-20 06:40:07.058819] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:47.185 [2024-11-20 06:40:07.058824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:28:47.185 [2024-11-20 06:40:07.058832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:47.185 [2024-11-20 06:40:07.058940] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:28:47.185 [2024-11-20 06:40:07.058945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:47.185 [2024-11-20 06:40:07.058954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.058961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafa550) 00:28:47.185 [2024-11-20 06:40:07.058968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.185 [2024-11-20 06:40:07.058979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c100, cid 0, qid 0 00:28:47.185 [2024-11-20 06:40:07.059187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.185 [2024-11-20 06:40:07.059193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.185 [2024-11-20 06:40:07.059196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.059200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c100) on tqpair=0xafa550 00:28:47.185 [2024-11-20 06:40:07.059205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:47.185 [2024-11-20 06:40:07.059214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.059218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.059221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafa550) 00:28:47.185 [2024-11-20 06:40:07.059228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.185 [2024-11-20 06:40:07.059241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c100, cid 0, qid 0 00:28:47.185 [2024-11-20 06:40:07.059448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.185 [2024-11-20 06:40:07.059454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.185 [2024-11-20 06:40:07.059458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.059462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c100) on tqpair=0xafa550 00:28:47.185 [2024-11-20 06:40:07.059466] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:47.185 [2024-11-20 06:40:07.059471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:47.185 [2024-11-20 06:40:07.059479] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:28:47.185 [2024-11-20 06:40:07.059490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:47.185 [2024-11-20 06:40:07.059500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.059503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafa550) 00:28:47.185 [2024-11-20 06:40:07.059510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.185 [2024-11-20 06:40:07.059520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c100, cid 0, qid 0 00:28:47.185 [2024-11-20 06:40:07.059827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.185 [2024-11-20 06:40:07.059833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.185 [2024-11-20 06:40:07.059837] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.059842] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafa550): datao=0, datal=4096, cccid=0 00:28:47.185 [2024-11-20 06:40:07.059846] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb5c100) on tqpair(0xafa550): expected_datao=0, payload_size=4096 00:28:47.185 [2024-11-20 06:40:07.059851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.059859] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.059863] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.060052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.185 [2024-11-20 06:40:07.060058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.185 [2024-11-20 06:40:07.060062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.060066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c100) on tqpair=0xafa550 00:28:47.185 [2024-11-20 06:40:07.060074] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:28:47.185 [2024-11-20 06:40:07.060079] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:28:47.185 [2024-11-20 06:40:07.060083] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:28:47.185 [2024-11-20 06:40:07.060090] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:28:47.185 [2024-11-20 06:40:07.060095] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:28:47.185 [2024-11-20 06:40:07.060100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:28:47.185 [2024-11-20 06:40:07.060110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:47.185 [2024-11-20 06:40:07.060117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.060124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.185 [2024-11-20 06:40:07.060128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafa550) 00:28:47.185 [2024-11-20 06:40:07.060135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:47.185 [2024-11-20 06:40:07.060146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c100, cid 0, qid 0 00:28:47.186 [2024-11-20 06:40:07.060355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:28:47.186 [2024-11-20 06:40:07.060362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.186 [2024-11-20 06:40:07.060365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c100) on tqpair=0xafa550 00:28:47.186 [2024-11-20 06:40:07.060377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafa550) 00:28:47.186 [2024-11-20 06:40:07.060391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.186 [2024-11-20 06:40:07.060397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xafa550) 00:28:47.186 [2024-11-20 06:40:07.060410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.186 [2024-11-20 06:40:07.060416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xafa550) 00:28:47.186 [2024-11-20 06:40:07.060429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.186 [2024-11-20 06:40:07.060435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafa550) 00:28:47.186 [2024-11-20 06:40:07.060448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.186 [2024-11-20 06:40:07.060453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:47.186 [2024-11-20 06:40:07.060461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:47.186 [2024-11-20 06:40:07.060468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060471] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafa550) 00:28:47.186 [2024-11-20 06:40:07.060478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.186 [2024-11-20 06:40:07.060490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c100, cid 0, qid 0 00:28:47.186 [2024-11-20 06:40:07.060495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c280, cid 1, qid 0 00:28:47.186 [2024-11-20 
06:40:07.060500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c400, cid 2, qid 0 00:28:47.186 [2024-11-20 06:40:07.060504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c580, cid 3, qid 0 00:28:47.186 [2024-11-20 06:40:07.060511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c700, cid 4, qid 0 00:28:47.186 [2024-11-20 06:40:07.060778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.186 [2024-11-20 06:40:07.060785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.186 [2024-11-20 06:40:07.060789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c700) on tqpair=0xafa550 00:28:47.186 [2024-11-20 06:40:07.060800] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:28:47.186 [2024-11-20 06:40:07.060806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:47.186 [2024-11-20 06:40:07.060815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:28:47.186 [2024-11-20 06:40:07.060822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:47.186 [2024-11-20 06:40:07.060828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.060836] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafa550) 00:28:47.186 [2024-11-20 06:40:07.060842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:47.186 [2024-11-20 06:40:07.060853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c700, cid 4, qid 0 00:28:47.186 [2024-11-20 06:40:07.061063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.186 [2024-11-20 06:40:07.061069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.186 [2024-11-20 06:40:07.061072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.061076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c700) on tqpair=0xafa550 00:28:47.186 [2024-11-20 06:40:07.061142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:28:47.186 [2024-11-20 06:40:07.061152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:47.186 [2024-11-20 06:40:07.061160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.061164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafa550) 00:28:47.186 [2024-11-20 06:40:07.061170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.186 [2024-11-20 06:40:07.061181] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c700, cid 4, qid 0 00:28:47.186 [2024-11-20 06:40:07.061416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.186 [2024-11-20 06:40:07.061423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.186 [2024-11-20 06:40:07.061426] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.061430] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafa550): datao=0, datal=4096, cccid=4 00:28:47.186 [2024-11-20 06:40:07.061434] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb5c700) on tqpair(0xafa550): expected_datao=0, payload_size=4096 00:28:47.186 [2024-11-20 06:40:07.061439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.061446] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.061450] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.061609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.186 [2024-11-20 06:40:07.061615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.186 [2024-11-20 06:40:07.061621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.061625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c700) on tqpair=0xafa550 00:28:47.186 [2024-11-20 06:40:07.061635] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:28:47.186 [2024-11-20 06:40:07.061644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:28:47.186 [2024-11-20 06:40:07.061654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:28:47.186 [2024-11-20 06:40:07.061661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.061665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafa550) 00:28:47.186 [2024-11-20 06:40:07.061671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.186 [2024-11-20 06:40:07.061682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c700, cid 4, qid 0 00:28:47.186 [2024-11-20 06:40:07.061919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.186 [2024-11-20 06:40:07.061925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.186 [2024-11-20 06:40:07.061929] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.061932] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafa550): datao=0, datal=4096, cccid=4 00:28:47.186 [2024-11-20 06:40:07.061937] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb5c700) on tqpair(0xafa550): expected_datao=0, payload_size=4096 00:28:47.186 [2024-11-20 06:40:07.061941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.061948] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.061951] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:28:47.186 [2024-11-20 06:40:07.062096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.186 [2024-11-20 06:40:07.062102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.186 [2024-11-20 06:40:07.062106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.186 [2024-11-20 06:40:07.062109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c700) on tqpair=0xafa550 00:28:47.186 [2024-11-20 06:40:07.062123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:47.186 [2024-11-20 06:40:07.062134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:47.186 [2024-11-20 06:40:07.062141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.062144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafa550) 00:28:47.187 [2024-11-20 06:40:07.062151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.187 [2024-11-20 06:40:07.062162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c700, cid 4, qid 0 00:28:47.187 [2024-11-20 06:40:07.062422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.187 [2024-11-20 06:40:07.062428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.187 [2024-11-20 06:40:07.062432] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.062435] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafa550): datao=0, datal=4096, cccid=4 00:28:47.187 [2024-11-20 06:40:07.062440] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb5c700) on tqpair(0xafa550): expected_datao=0, payload_size=4096 00:28:47.187 [2024-11-20 06:40:07.062444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.062453] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.062457] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.062588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.187 [2024-11-20 06:40:07.062594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.187 [2024-11-20 06:40:07.062597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.062601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c700) on tqpair=0xafa550 00:28:47.187 [2024-11-20 06:40:07.062609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:47.187 [2024-11-20 06:40:07.062617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:28:47.187 [2024-11-20 06:40:07.062626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:28:47.187 [2024-11-20 06:40:07.062633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state 
to set host behavior support feature (timeout 30000 ms) 00:28:47.187 [2024-11-20 06:40:07.062638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:47.187 [2024-11-20 06:40:07.062643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:28:47.187 [2024-11-20 06:40:07.062649] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:28:47.187 [2024-11-20 06:40:07.062654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:28:47.187 [2024-11-20 06:40:07.062659] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:28:47.187 [2024-11-20 06:40:07.062674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.062678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafa550) 00:28:47.187 [2024-11-20 06:40:07.062685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.187 [2024-11-20 06:40:07.062692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.062696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.062699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafa550) 00:28:47.187 [2024-11-20 06:40:07.062706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.187 [2024-11-20 06:40:07.062719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c700, cid 4, qid 0 00:28:47.187 [2024-11-20 06:40:07.062724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c880, cid 5, qid 0 00:28:47.187 [2024-11-20 06:40:07.062991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.187 [2024-11-20 06:40:07.062998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.187 [2024-11-20 06:40:07.063001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c700) on tqpair=0xafa550 00:28:47.187 [2024-11-20 06:40:07.063012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.187 [2024-11-20 06:40:07.063017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.187 [2024-11-20 06:40:07.063021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c880) on tqpair=0xafa550 00:28:47.187 [2024-11-20 06:40:07.063034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafa550) 00:28:47.187 [2024-11-20 06:40:07.063047] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.187 [2024-11-20 06:40:07.063057] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c880, cid 5, qid 0 00:28:47.187 [2024-11-20 06:40:07.063294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.187 [2024-11-20 06:40:07.063300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.187 [2024-11-20 06:40:07.063304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c880) on tqpair=0xafa550 00:28:47.187 [2024-11-20 06:40:07.063317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafa550) 00:28:47.187 [2024-11-20 06:40:07.063327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.187 [2024-11-20 06:40:07.063337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c880, cid 5, qid 0 00:28:47.187 [2024-11-20 06:40:07.063545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.187 [2024-11-20 06:40:07.063551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.187 [2024-11-20 06:40:07.063554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c880) on tqpair=0xafa550 00:28:47.187 [2024-11-20 06:40:07.063567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafa550) 00:28:47.187 [2024-11-20 06:40:07.063578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.187 [2024-11-20 06:40:07.063588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c880, cid 5, qid 0 00:28:47.187 [2024-11-20 06:40:07.063808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.187 [2024-11-20 06:40:07.063814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.187 [2024-11-20 06:40:07.063818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c880) on tqpair=0xafa550 00:28:47.187 [2024-11-20 06:40:07.063836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafa550) 00:28:47.187 [2024-11-20 06:40:07.063846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.187 [2024-11-20 06:40:07.063854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafa550) 00:28:47.187 [2024-11-20 06:40:07.063864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.187 [2024-11-20 06:40:07.063871] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xafa550) 00:28:47.187 [2024-11-20 06:40:07.063881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.187 [2024-11-20 06:40:07.063888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.063892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xafa550) 00:28:47.187 [2024-11-20 06:40:07.063903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.187 [2024-11-20 06:40:07.063915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c880, cid 5, qid 0 00:28:47.187 [2024-11-20 06:40:07.063920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c700, cid 4, qid 0 00:28:47.187 [2024-11-20 06:40:07.063925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5ca00, cid 6, qid 0 00:28:47.187 [2024-11-20 06:40:07.063930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5cb80, cid 7, qid 0 00:28:47.187 [2024-11-20 06:40:07.064219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.187 [2024-11-20 06:40:07.064226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.187 [2024-11-20 06:40:07.064229] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.064233] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafa550): datao=0, datal=8192, cccid=5 00:28:47.187 [2024-11-20 06:40:07.064237] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb5c880) on tqpair(0xafa550): expected_datao=0, payload_size=8192 00:28:47.187 [2024-11-20 06:40:07.064242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.064347] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.064351] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.064357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.187 [2024-11-20 06:40:07.064363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.187 [2024-11-20 06:40:07.064366] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.187 [2024-11-20 06:40:07.064370] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafa550): datao=0, datal=512, cccid=4 00:28:47.188 [2024-11-20 06:40:07.064374] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb5c700) on tqpair(0xafa550): expected_datao=0, payload_size=512 00:28:47.188 [2024-11-20 06:40:07.064378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064385] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064388] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.188 [2024-11-20 06:40:07.064400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:28:47.188 [2024-11-20 06:40:07.064403] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064406] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafa550): datao=0, datal=512, cccid=6 00:28:47.188 [2024-11-20 06:40:07.064411] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb5ca00) on tqpair(0xafa550): expected_datao=0, payload_size=512 00:28:47.188 [2024-11-20 06:40:07.064415] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064422] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064425] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:47.188 [2024-11-20 06:40:07.064436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:47.188 [2024-11-20 06:40:07.064440] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064443] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafa550): datao=0, datal=4096, cccid=7 00:28:47.188 [2024-11-20 06:40:07.064448] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb5cb80) on tqpair(0xafa550): expected_datao=0, payload_size=4096 00:28:47.188 [2024-11-20 06:40:07.064452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064469] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064475] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.188 [2024-11-20 06:40:07.064632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.188 [2024-11-20 06:40:07.064636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c880) on tqpair=0xafa550 00:28:47.188 [2024-11-20 06:40:07.064652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.188 [2024-11-20 06:40:07.064658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.188 [2024-11-20 06:40:07.064661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c700) on tqpair=0xafa550 00:28:47.188 [2024-11-20 06:40:07.064675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.188 [2024-11-20 06:40:07.064681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.188 [2024-11-20 06:40:07.064685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5ca00) on tqpair=0xafa550 00:28:47.188 [2024-11-20 06:40:07.064695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.188 [2024-11-20 06:40:07.064701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.188 [2024-11-20 06:40:07.064705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.188 [2024-11-20 06:40:07.064708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5cb80) on tqpair=0xafa550 
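The init trace above (check EN, disable, CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY, AER setup, keep-alive, queue count, namespace identify) is the state machine SPDK's host driver runs on attach. A minimal sketch of the call that drives it, assuming the public spdk_nvme_connect() API and the target coordinates printed in this log (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1); this is illustrative only and not part of the test's scripts:

#include "spdk/nvme.h"

/* Connect synchronously over NVMe/TCP; spdk_env_init() is assumed to have
 * run already. Passing NULL opts keeps the driver defaults, including the
 * 10 s keep-alive timeout that shows up above as one keep-alive every
 * 5000000 us. */
static struct spdk_nvme_ctrlr *
attach_tcp_ctrlr(void)
{
	struct spdk_nvme_transport_id trid = {};

	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

	/* Drives the state machine traced above until the controller reports
	 * ready, or a per-state timeout (15000/30000 ms above) fires. */
	return spdk_nvme_connect(&trid, NULL, 0);
}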
00:28:47.188 ===================================================== 00:28:47.188 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:47.188 ===================================================== 00:28:47.188 Controller Capabilities/Features 00:28:47.188 ================================ 00:28:47.188 Vendor ID: 8086 00:28:47.188 Subsystem Vendor ID: 8086 00:28:47.188 Serial Number: SPDK00000000000001 00:28:47.188 Model Number: SPDK bdev Controller 00:28:47.188 Firmware Version: 25.01 00:28:47.188 Recommended Arb Burst: 6 00:28:47.188 IEEE OUI Identifier: e4 d2 5c 00:28:47.188 Multi-path I/O 00:28:47.188 May have multiple subsystem ports: Yes 00:28:47.188 May have multiple controllers: Yes 00:28:47.188 Associated with SR-IOV VF: No 00:28:47.188 Max Data Transfer Size: 131072 00:28:47.188 Max Number of Namespaces: 32 00:28:47.188 Max Number of I/O Queues: 127 00:28:47.188 NVMe Specification Version (VS): 1.3 00:28:47.188 NVMe Specification Version (Identify): 1.3 00:28:47.188 Maximum Queue Entries: 128 00:28:47.188 Contiguous Queues Required: Yes 00:28:47.188 Arbitration Mechanisms Supported 00:28:47.188 Weighted Round Robin: Not Supported 00:28:47.188 Vendor Specific: Not Supported 00:28:47.188 Reset Timeout: 15000 ms 00:28:47.188 Doorbell Stride: 4 bytes 00:28:47.188 NVM Subsystem Reset: Not Supported 00:28:47.188 Command Sets Supported 00:28:47.188 NVM Command Set: Supported 00:28:47.188 Boot Partition: Not Supported 00:28:47.188 Memory Page Size Minimum: 4096 bytes 00:28:47.188 Memory Page Size Maximum: 4096 bytes 00:28:47.188 Persistent Memory Region: Not Supported 00:28:47.188 Optional Asynchronous Events Supported 00:28:47.188 Namespace Attribute Notices: Supported 00:28:47.188 Firmware Activation Notices: Not Supported 00:28:47.188 ANA Change Notices: Not Supported 00:28:47.188 PLE Aggregate Log Change Notices: Not Supported 00:28:47.188 LBA Status Info Alert Notices: Not Supported 00:28:47.188 EGE Aggregate Log Change Notices: Not Supported 00:28:47.188 Normal NVM Subsystem Shutdown event: Not Supported 00:28:47.188 Zone Descriptor Change Notices: Not Supported 00:28:47.188 Discovery Log Change Notices: Not Supported 00:28:47.188 Controller Attributes 00:28:47.188 128-bit Host Identifier: Supported 00:28:47.188 Non-Operational Permissive Mode: Not Supported 00:28:47.188 NVM Sets: Not Supported 00:28:47.188 Read Recovery Levels: Not Supported 00:28:47.188 Endurance Groups: Not Supported 00:28:47.188 Predictable Latency Mode: Not Supported 00:28:47.188 Traffic Based Keep ALive: Not Supported 00:28:47.188 Namespace Granularity: Not Supported 00:28:47.188 SQ Associations: Not Supported 00:28:47.188 UUID List: Not Supported 00:28:47.188 Multi-Domain Subsystem: Not Supported 00:28:47.188 Fixed Capacity Management: Not Supported 00:28:47.188 Variable Capacity Management: Not Supported 00:28:47.188 Delete Endurance Group: Not Supported 00:28:47.188 Delete NVM Set: Not Supported 00:28:47.188 Extended LBA Formats Supported: Not Supported 00:28:47.188 Flexible Data Placement Supported: Not Supported 00:28:47.188 00:28:47.188 Controller Memory Buffer Support 00:28:47.188 ================================ 00:28:47.188 Supported: No 00:28:47.188 00:28:47.188 Persistent Memory Region Support 00:28:47.188 ================================ 00:28:47.188 Supported: No 00:28:47.188 00:28:47.188 Admin Command Set Attributes 00:28:47.188 ============================ 00:28:47.188 Security Send/Receive: Not Supported 00:28:47.188 Format NVM: Not Supported 00:28:47.188 Firmware 
Activate/Download: Not Supported 00:28:47.188 Namespace Management: Not Supported 00:28:47.188 Device Self-Test: Not Supported 00:28:47.188 Directives: Not Supported 00:28:47.188 NVMe-MI: Not Supported 00:28:47.188 Virtualization Management: Not Supported 00:28:47.188 Doorbell Buffer Config: Not Supported 00:28:47.188 Get LBA Status Capability: Not Supported 00:28:47.188 Command & Feature Lockdown Capability: Not Supported 00:28:47.188 Abort Command Limit: 4 00:28:47.188 Async Event Request Limit: 4 00:28:47.188 Number of Firmware Slots: N/A 00:28:47.188 Firmware Slot 1 Read-Only: N/A 00:28:47.188 Firmware Activation Without Reset: N/A 00:28:47.188 Multiple Update Detection Support: N/A 00:28:47.188 Firmware Update Granularity: No Information Provided 00:28:47.188 Per-Namespace SMART Log: No 00:28:47.188 Asymmetric Namespace Access Log Page: Not Supported 00:28:47.188 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:47.188 Command Effects Log Page: Supported 00:28:47.188 Get Log Page Extended Data: Supported 00:28:47.188 Telemetry Log Pages: Not Supported 00:28:47.188 Persistent Event Log Pages: Not Supported 00:28:47.188 Supported Log Pages Log Page: May Support 00:28:47.188 Commands Supported & Effects Log Page: Not Supported 00:28:47.188 Feature Identifiers & Effects Log Page:May Support 00:28:47.188 NVMe-MI Commands & Effects Log Page: May Support 00:28:47.188 Data Area 4 for Telemetry Log: Not Supported 00:28:47.188 Error Log Page Entries Supported: 128 00:28:47.188 Keep Alive: Supported 00:28:47.188 Keep Alive Granularity: 10000 ms 00:28:47.188 00:28:47.188 NVM Command Set Attributes 00:28:47.188 ========================== 00:28:47.188 Submission Queue Entry Size 00:28:47.188 Max: 64 00:28:47.188 Min: 64 00:28:47.188 Completion Queue Entry Size 00:28:47.188 Max: 16 00:28:47.188 Min: 16 00:28:47.188 Number of Namespaces: 32 00:28:47.188 Compare Command: Supported 00:28:47.188 Write Uncorrectable Command: Not Supported 00:28:47.188 Dataset Management Command: Supported 00:28:47.188 Write Zeroes Command: Supported 00:28:47.188 Set Features Save Field: Not Supported 00:28:47.188 Reservations: Supported 00:28:47.188 Timestamp: Not Supported 00:28:47.188 Copy: Supported 00:28:47.188 Volatile Write Cache: Present 00:28:47.188 Atomic Write Unit (Normal): 1 00:28:47.188 Atomic Write Unit (PFail): 1 00:28:47.188 Atomic Compare & Write Unit: 1 00:28:47.189 Fused Compare & Write: Supported 00:28:47.189 Scatter-Gather List 00:28:47.189 SGL Command Set: Supported 00:28:47.189 SGL Keyed: Supported 00:28:47.189 SGL Bit Bucket Descriptor: Not Supported 00:28:47.189 SGL Metadata Pointer: Not Supported 00:28:47.189 Oversized SGL: Not Supported 00:28:47.189 SGL Metadata Address: Not Supported 00:28:47.189 SGL Offset: Supported 00:28:47.189 Transport SGL Data Block: Not Supported 00:28:47.189 Replay Protected Memory Block: Not Supported 00:28:47.189 00:28:47.189 Firmware Slot Information 00:28:47.189 ========================= 00:28:47.189 Active slot: 1 00:28:47.189 Slot 1 Firmware Revision: 25.01 00:28:47.189 00:28:47.189 00:28:47.189 Commands Supported and Effects 00:28:47.189 ============================== 00:28:47.189 Admin Commands 00:28:47.189 -------------- 00:28:47.189 Get Log Page (02h): Supported 00:28:47.189 Identify (06h): Supported 00:28:47.189 Abort (08h): Supported 00:28:47.189 Set Features (09h): Supported 00:28:47.189 Get Features (0Ah): Supported 00:28:47.189 Asynchronous Event Request (0Ch): Supported 00:28:47.189 Keep Alive (18h): Supported 00:28:47.189 I/O Commands 00:28:47.189 
------------ 00:28:47.189 Flush (00h): Supported LBA-Change 00:28:47.189 Write (01h): Supported LBA-Change 00:28:47.189 Read (02h): Supported 00:28:47.189 Compare (05h): Supported 00:28:47.189 Write Zeroes (08h): Supported LBA-Change 00:28:47.189 Dataset Management (09h): Supported LBA-Change 00:28:47.189 Copy (19h): Supported LBA-Change 00:28:47.189 00:28:47.189 Error Log 00:28:47.189 ========= 00:28:47.189 00:28:47.189 Arbitration 00:28:47.189 =========== 00:28:47.189 Arbitration Burst: 1 00:28:47.189 00:28:47.189 Power Management 00:28:47.189 ================ 00:28:47.189 Number of Power States: 1 00:28:47.189 Current Power State: Power State #0 00:28:47.189 Power State #0: 00:28:47.189 Max Power: 0.00 W 00:28:47.189 Non-Operational State: Operational 00:28:47.189 Entry Latency: Not Reported 00:28:47.189 Exit Latency: Not Reported 00:28:47.189 Relative Read Throughput: 0 00:28:47.189 Relative Read Latency: 0 00:28:47.189 Relative Write Throughput: 0 00:28:47.189 Relative Write Latency: 0 00:28:47.189 Idle Power: Not Reported 00:28:47.189 Active Power: Not Reported 00:28:47.189 Non-Operational Permissive Mode: Not Supported 00:28:47.189 00:28:47.189 Health Information 00:28:47.189 ================== 00:28:47.189 Critical Warnings: 00:28:47.189 Available Spare Space: OK 00:28:47.189 Temperature: OK 00:28:47.189 Device Reliability: OK 00:28:47.189 Read Only: No 00:28:47.189 Volatile Memory Backup: OK 00:28:47.189 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:47.189 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:47.189 Available Spare: 0% 00:28:47.189 Available Spare Threshold: 0% 00:28:47.189 Life Percentage Used:[2024-11-20 06:40:07.068819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.068827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xafa550) 00:28:47.189 [2024-11-20 06:40:07.068835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.189 [2024-11-20 06:40:07.068849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5cb80, cid 7, qid 0 00:28:47.189 [2024-11-20 06:40:07.069041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.189 [2024-11-20 06:40:07.069048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.189 [2024-11-20 06:40:07.069051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5cb80) on tqpair=0xafa550 00:28:47.189 [2024-11-20 06:40:07.069090] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:28:47.189 [2024-11-20 06:40:07.069101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c100) on tqpair=0xafa550 00:28:47.189 [2024-11-20 06:40:07.069107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.189 [2024-11-20 06:40:07.069113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c280) on tqpair=0xafa550 00:28:47.189 [2024-11-20 06:40:07.069117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.189 [2024-11-20 06:40:07.069122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c400) on 
tqpair=0xafa550 00:28:47.189 [2024-11-20 06:40:07.069127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.189 [2024-11-20 06:40:07.069132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c580) on tqpair=0xafa550 00:28:47.189 [2024-11-20 06:40:07.069136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.189 [2024-11-20 06:40:07.069145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafa550) 00:28:47.189 [2024-11-20 06:40:07.069166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.189 [2024-11-20 06:40:07.069178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c580, cid 3, qid 0 00:28:47.189 [2024-11-20 06:40:07.069396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.189 [2024-11-20 06:40:07.069403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.189 [2024-11-20 06:40:07.069406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c580) on tqpair=0xafa550 00:28:47.189 [2024-11-20 06:40:07.069417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafa550) 00:28:47.189 [2024-11-20 06:40:07.069431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.189 [2024-11-20 06:40:07.069444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c580, cid 3, qid 0 00:28:47.189 [2024-11-20 06:40:07.069649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.189 [2024-11-20 06:40:07.069655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.189 [2024-11-20 06:40:07.069658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c580) on tqpair=0xafa550 00:28:47.189 [2024-11-20 06:40:07.069667] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:28:47.189 [2024-11-20 06:40:07.069673] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:28:47.189 [2024-11-20 06:40:07.069682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafa550) 00:28:47.189 [2024-11-20 06:40:07.069696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
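The destruct trace running here ("Prepare to destruct SSD", the ABORTED - SQ DELETION completions for the four outstanding AERs, then the PROPERTY SET/GET pairs against the shutdown registers) is the driver-side teardown. A hedged sketch of the call that triggers it, assuming the public async detach API; illustrative only:

#include <errno.h>
#include "spdk/nvme.h"

/* Async detach mirrors the polling visible below: CC.SHN is written via a
 * fabrics PROPERTY SET, then CSTS is re-read via PROPERTY GET until the
 * controller reports shutdown complete or the 10000 ms shutdown timeout
 * logged above expires. */
static void
detach_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &ctx) == 0 && ctx != NULL) {
		while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
			/* keep polling the admin qpair */
		}
	}
}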
00:28:47.189 [2024-11-20 06:40:07.069707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c580, cid 3, qid 0 00:28:47.189 [2024-11-20 06:40:07.069949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.189 [2024-11-20 06:40:07.069956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.189 [2024-11-20 06:40:07.069960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c580) on tqpair=0xafa550 00:28:47.189 [2024-11-20 06:40:07.069974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.069981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafa550) 00:28:47.189 [2024-11-20 06:40:07.069988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.189 [2024-11-20 06:40:07.069999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c580, cid 3, qid 0 00:28:47.189 [2024-11-20 06:40:07.070169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.189 [2024-11-20 06:40:07.070175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.189 [2024-11-20 06:40:07.070179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.070183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c580) on tqpair=0xafa550 00:28:47.189 [2024-11-20 06:40:07.070192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.070199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.070202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafa550) 00:28:47.189 [2024-11-20 06:40:07.070209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.189 [2024-11-20 06:40:07.070220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c580, cid 3, qid 0 00:28:47.189 [2024-11-20 06:40:07.070453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:47.189 [2024-11-20 06:40:07.070459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:47.189 [2024-11-20 06:40:07.070463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.070467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c580) on tqpair=0xafa550 00:28:47.189 [2024-11-20 06:40:07.070477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.070481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:47.189 [2024-11-20 06:40:07.070485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafa550) 00:28:47.190 [2024-11-20 06:40:07.070491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.190 [2024-11-20 06:40:07.070502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c580, cid 3, qid 0 00:28:47.190 [2024-11-20 06:40:07.070705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5
00:28:47.190 [2024-11-20 06:40:07.070711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:47.190 [2024-11-20 06:40:07.070715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:47.190 [2024-11-20 06:40:07.070718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c580) on tqpair=0xafa550
00:28:47.190 [2024-11-20 06:40:07.070728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:47.190 [2024-11-20 06:40:07.070732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:47.190 [2024-11-20 06:40:07.070736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafa550)
00:28:47.190 [2024-11-20 06:40:07.070742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.190 [2024-11-20 06:40:07.070759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb5c580, cid 3, qid 0
00:28:47.190 [2024-11-20 06:40:07.077041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:47.190 [2024-11-20 06:40:07.077047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:47.190 [2024-11-20 06:40:07.077051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:47.190 [2024-11-20 06:40:07.077055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb5c580) on tqpair=0xafa550
00:28:47.190 [2024-11-20 06:40:07.077063] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds
00:28:47.190 0%
00:28:47.190 Data Units Read: 0
00:28:47.190 Data Units Written: 0
00:28:47.190 Host Read Commands: 0
00:28:47.190 Host Write Commands: 0
00:28:47.190 Controller Busy Time: 0 minutes
00:28:47.190 Power Cycles: 0
00:28:47.190 Power On Hours: 0 hours
00:28:47.190 Unsafe Shutdowns: 0
00:28:47.190 Unrecoverable Media Errors: 0
00:28:47.190 Lifetime Error Log Entries: 0
00:28:47.190 Warning Temperature Time: 0 minutes
00:28:47.190 Critical Temperature Time: 0 minutes
00:28:47.190
00:28:47.190 Number of Queues
00:28:47.190 ================
00:28:47.190 Number of I/O Submission Queues: 127
00:28:47.190 Number of I/O Completion Queues: 127
00:28:47.190
00:28:47.190 Active Namespaces
00:28:47.190 =================
00:28:47.190 Namespace ID:1
00:28:47.190 Error Recovery Timeout: Unlimited
00:28:47.190 Command Set Identifier: NVM (00h)
00:28:47.190 Deallocate: Supported
00:28:47.190 Deallocated/Unwritten Error: Not Supported
00:28:47.190 Deallocated Read Value: Unknown
00:28:47.190 Deallocate in Write Zeroes: Not Supported
00:28:47.190 Deallocated Guard Field: 0xFFFF
00:28:47.190 Flush: Supported
00:28:47.190 Reservation: Supported
00:28:47.190 Namespace Sharing Capabilities: Multiple Controllers
00:28:47.190 Size (in LBAs): 131072 (0GiB)
00:28:47.190 Capacity (in LBAs): 131072 (0GiB)
00:28:47.190 Utilization (in LBAs): 131072 (0GiB)
00:28:47.190 NGUID: ABCDEF0123456789ABCDEF0123456789
00:28:47.190 EUI64: ABCDEF0123456789
00:28:47.191 UUID: 93c053d3-6c80-4f60-a01f-df9d0e41c21d
00:28:47.191 Thin Provisioning: Not Supported
00:28:47.191 Per-NS Atomic Units: Yes
00:28:47.191 Atomic Boundary Size (Normal): 0
00:28:47.191 Atomic Boundary Size (PFail): 0
00:28:47.191 Atomic Boundary Offset: 0
00:28:47.191 Maximum Single Source Range Length: 65535
00:28:47.191 Maximum Copy Length: 65535
00:28:47.191 Maximum Source Range Count: 1
00:28:47.191 NGUID/EUI64 Never Reused: No
00:28:47.191 Namespace Write Protected: No
00:28:47.191 Number of LBA Formats: 1
00:28:47.191 Current LBA Format: LBA Format #00
00:28:47.191 LBA Format #00: Data Size: 512 Metadata Size: 0
00:28:47.191
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2809866 ']' 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2809866 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 2809866 ']' 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 2809866 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2809866 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2809866' 00:28:47.453 killing process with pid 2809866 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 2809866 00:28:47.453 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 2809866 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.713 06:40:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.624 06:40:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.624 00:28:49.624 real 0m11.836s 00:28:49.624 user 0m8.601s 00:28:49.624 sys 0m6.252s 00:28:49.624 06:40:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:49.624 06:40:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:49.624 ************************************ 00:28:49.624 END TEST nvmf_identify 00:28:49.624 ************************************ 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.885 ************************************ 00:28:49.885 START TEST nvmf_perf 00:28:49.885 ************************************ 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:49.885 * Looking for test storage... 00:28:49.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.885 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:49.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.886 --rc genhtml_branch_coverage=1 00:28:49.886 --rc genhtml_function_coverage=1 00:28:49.886 --rc genhtml_legend=1 00:28:49.886 --rc geninfo_all_blocks=1 00:28:49.886 --rc geninfo_unexecuted_blocks=1 00:28:49.886 00:28:49.886 ' 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:49.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.886 --rc genhtml_branch_coverage=1 00:28:49.886 --rc genhtml_function_coverage=1 00:28:49.886 --rc genhtml_legend=1 00:28:49.886 --rc geninfo_all_blocks=1 00:28:49.886 --rc geninfo_unexecuted_blocks=1 00:28:49.886 00:28:49.886 ' 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:49.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.886 --rc genhtml_branch_coverage=1 00:28:49.886 --rc genhtml_function_coverage=1 00:28:49.886 --rc genhtml_legend=1 00:28:49.886 --rc geninfo_all_blocks=1 00:28:49.886 --rc geninfo_unexecuted_blocks=1 00:28:49.886 00:28:49.886 ' 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:49.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.886 --rc genhtml_branch_coverage=1 00:28:49.886 --rc genhtml_function_coverage=1 00:28:49.886 --rc genhtml_legend=1 00:28:49.886 --rc geninfo_all_blocks=1 00:28:49.886 --rc geninfo_unexecuted_blocks=1 00:28:49.886 00:28:49.886 ' 00:28:49.886 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.147 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:50.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.148 06:40:09 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.148 06:40:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.294 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:58.295 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:58.295 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:58.295 Found net devices under 0000:31:00.0: cvl_0_0 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.295 06:40:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:58.295 Found net devices under 0000:31:00.1: cvl_0_1 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.295 06:40:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:28:58.295 00:28:58.295 --- 10.0.0.2 ping statistics --- 00:28:58.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.295 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:58.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:28:58.295 00:28:58.295 --- 10.0.0.1 ping statistics --- 00:28:58.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.295 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2814430 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2814430 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 2814430 ']' 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:28:58.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:58.295 06:40:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:58.295 [2024-11-20 06:40:17.607991] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:28:58.295 [2024-11-20 06:40:17.608081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.295 [2024-11-20 06:40:17.708811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.295 [2024-11-20 06:40:17.761914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.295 [2024-11-20 06:40:17.761965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.296 [2024-11-20 06:40:17.761974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.296 [2024-11-20 06:40:17.761981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.296 [2024-11-20 06:40:17.761988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.296 [2024-11-20 06:40:17.764458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.296 [2024-11-20 06:40:17.764613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.296 [2024-11-20 06:40:17.764800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.296 [2024-11-20 06:40:17.764831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.557 06:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:58.557 06:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:28:58.557 06:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.557 06:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:58.557 06:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:58.819 06:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.819 06:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:58.819 06:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:59.081 06:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:59.343 06:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:59.343 06:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:28:59.343 06:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:59.605 06:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
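The bdev and subsystem provisioning that the trace below performs can be reproduced standalone with the same rpc.py calls. A minimal sketch, assuming an nvmf_tgt is already running and reusing the NQN, serial number and listen address from this job:

  # Sketch of the target setup this test drives through rpc.py (commands taken from the trace below).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512    # 64 MB malloc bdev with 512-byte blocks -> Malloc0
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420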
00:28:59.605 06:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:28:59.605 06:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:59.605 06:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:59.605 06:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:59.866 [2024-11-20 06:40:19.596515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.866 06:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.127 06:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:00.127 06:40:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:00.127 06:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:00.127 06:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:00.388 06:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.650 [2024-11-20 06:40:20.408373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.650 06:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:00.910 06:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:29:00.910 06:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:00.910 06:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:00.910 06:40:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:02.297 Initializing NVMe Controllers 00:29:02.297 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:29:02.297 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:29:02.297 Initialization complete. Launching workers. 
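spdk_nvme_perf drives both legs of the comparison; only the -r transport ID differs between the local PCIe baseline above and the NVMe/TCP target, and the corresponding latency tables follow below. A minimal sketch of the two invocations, with parameters taken from this trace:

  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  # Local PCIe baseline: queue depth 32, 4 KiB IOs, 50/50 random read/write, 1 second.
  $perf -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
  # Same tool against the NVMe/TCP listener set up above, queue depth 1.
  $perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'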
00:29:02.297 ========================================================
00:29:02.297 Latency(us)
00:29:02.297 Device Information : IOPS MiB/s Average min max
00:29:02.297 PCIE (0000:65:00.0) NSID 1 from core 0: 78831.26 307.93 405.23 13.23 4769.27
00:29:02.297 ========================================================
00:29:02.297 Total : 78831.26 307.93 405.23 13.23 4769.27
00:29:02.297
00:29:02.297 06:40:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:03.693 Initializing NVMe Controllers
00:29:03.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:03.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:03.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:03.693 Initialization complete. Launching workers.
00:29:03.693 ========================================================
00:29:03.693 Latency(us)
00:29:03.693 Device Information : IOPS MiB/s Average min max
00:29:03.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 107.00 0.42 9643.38 89.76 45591.84
00:29:03.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19717.66 7954.60 47890.29
00:29:03.693 ========================================================
00:29:03.693 Total : 158.00 0.62 12895.20 89.76 47890.29
00:29:03.693
00:29:03.693 06:40:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:04.634 Initializing NVMe Controllers
00:29:04.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:04.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:04.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:04.634 Initialization complete. Launching workers.
00:29:04.634 ========================================================
00:29:04.634 Latency(us)
00:29:04.634 Device Information : IOPS MiB/s Average min max
00:29:04.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11933.00 46.61 2683.95 451.19 6275.39
00:29:04.634 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3810.00 14.88 8459.39 6434.63 15996.15
00:29:04.634 ========================================================
00:29:04.634 Total : 15743.00 61.50 4081.67 451.19 15996.15
00:29:04.634
00:29:04.634 06:40:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:29:04.634 06:40:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:29:04.634 06:40:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:07.178 Initializing NVMe Controllers
00:29:07.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:07.178 Controller IO queue size 128, less than required.
00:29:07.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:07.178 Controller IO queue size 128, less than required.
00:29:07.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:07.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:07.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:07.178 Initialization complete. Launching workers.
00:29:07.178 ========================================================
00:29:07.178 Latency(us)
00:29:07.178 Device Information : IOPS MiB/s Average min max
00:29:07.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1985.83 496.46 65454.62 33267.88 114178.60
00:29:07.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.45 152.36 216295.17 71824.00 339313.85
00:29:07.178 ========================================================
00:29:07.178 Total : 2595.28 648.82 100876.43 33267.88 339313.85
00:29:07.178
00:29:07.178 06:40:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:29:07.178 No valid NVMe controllers or AIO or URING devices found
00:29:07.178 Initializing NVMe Controllers
00:29:07.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:07.178 Controller IO queue size 128, less than required.
00:29:07.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:07.178 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:29:07.178 Controller IO queue size 128, less than required.
00:29:07.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:07.178 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:29:07.178 WARNING: Some requested NVMe devices were skipped
00:29:07.178 06:40:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:29:09.723 Initializing NVMe Controllers
00:29:09.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:09.723 Controller IO queue size 128, less than required.
00:29:09.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:09.723 Controller IO queue size 128, less than required.
00:29:09.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:09.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:09.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:09.723 Initialization complete. Launching workers.
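The --transport-stat flag makes spdk_nvme_perf dump per-lcore transport counters at exit; they appear below. Read roughly: polls counts transport poll iterations and idle_polls the ones that completed no work, so in the NSID 1 statistics below about 26890/41372 ~ 65% of polls were idle (~63% for NSID 2), while sock_completions and nvme_completions count socket-level events and completed NVMe commands. These field readings are the usual SPDK poll-group semantics, stated here as an assumption rather than taken from this log.

  # Trivial sanity check on the counters printed below (values copied from this run).
  echo $(( 26890 * 100 / 41372 ))   # NSID 1 qpair: prints 64, i.e. roughly two-thirds of polls were idle
  echo $(( 26952 * 100 / 42418 ))   # NSID 2 qpair: prints 63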
00:29:09.723
00:29:09.723 ====================
00:29:09.723 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:29:09.723 TCP transport:
00:29:09.723 polls: 41372
00:29:09.723 idle_polls: 26890
00:29:09.723 sock_completions: 14482
00:29:09.723 nvme_completions: 7219
00:29:09.723 submitted_requests: 10892
00:29:09.723 queued_requests: 1
00:29:09.723
00:29:09.723 ====================
00:29:09.723 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:29:09.723 TCP transport:
00:29:09.723 polls: 42418
00:29:09.723 idle_polls: 26952
00:29:09.723 sock_completions: 15466
00:29:09.723 nvme_completions: 7313
00:29:09.723 submitted_requests: 10942
00:29:09.723 queued_requests: 1
00:29:09.723 ========================================================
00:29:09.723 Latency(us)
00:29:09.723 Device Information : IOPS MiB/s Average min max
00:29:09.723 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1804.48 451.12 71806.08 34486.10 132689.71
00:29:09.723 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1827.98 457.00 70815.35 30874.38 116791.08
00:29:09.723 ========================================================
00:29:09.723 Total : 3632.46 908.12 71307.51 30874.38 132689.71
00:29:09.723
00:29:09.723 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:29:09.723 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2814430 ']'
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2814430
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 2814430 ']'
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 2814430
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2814430
00:29:09.983 06:40:29
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2814430' 00:29:09.983 killing process with pid 2814430 00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 2814430 00:29:09.983 06:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 2814430 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.895 06:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.441 06:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.441 00:29:14.441 real 0m24.268s 00:29:14.441 user 0m57.781s 00:29:14.441 sys 0m8.764s 00:29:14.441 06:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:14.441 06:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:14.441 ************************************ 00:29:14.441 END TEST nvmf_perf 00:29:14.441 ************************************ 00:29:14.441 06:40:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:14.441 06:40:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:14.441 06:40:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:14.441 06:40:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.441 ************************************ 00:29:14.441 START TEST nvmf_fio_host 00:29:14.441 ************************************ 00:29:14.441 06:40:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:14.441 * Looking for test storage... 
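Each host test in this job follows the same harness pattern: run_test, a shell helper from the common autotest scripts (its definition is not shown in this log), wraps the per-test script, times it, and prints the START/END banners seen above, while nvmftestfini inside each script unloads nvme-tcp and nvme-fabrics and kills the target. A minimal sketch of the invocations exactly as this log issues them:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  run_test nvmf_perf     "$spdk/test/nvmf/host/perf.sh" --transport=tcp
  run_test nvmf_fio_host "$spdk/test/nvmf/host/fio.sh" --transport=tcp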
00:29:14.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.441 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:14.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.442 --rc genhtml_branch_coverage=1 00:29:14.442 --rc genhtml_function_coverage=1 00:29:14.442 --rc genhtml_legend=1 00:29:14.442 --rc geninfo_all_blocks=1 00:29:14.442 --rc geninfo_unexecuted_blocks=1 00:29:14.442 00:29:14.442 ' 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:14.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.442 --rc genhtml_branch_coverage=1 00:29:14.442 --rc genhtml_function_coverage=1 00:29:14.442 --rc genhtml_legend=1 00:29:14.442 --rc geninfo_all_blocks=1 00:29:14.442 --rc geninfo_unexecuted_blocks=1 00:29:14.442 00:29:14.442 ' 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:14.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.442 --rc genhtml_branch_coverage=1 00:29:14.442 --rc genhtml_function_coverage=1 00:29:14.442 --rc genhtml_legend=1 00:29:14.442 --rc geninfo_all_blocks=1 00:29:14.442 --rc geninfo_unexecuted_blocks=1 00:29:14.442 00:29:14.442 ' 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:14.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.442 --rc genhtml_branch_coverage=1 00:29:14.442 --rc genhtml_function_coverage=1 00:29:14.442 --rc genhtml_legend=1 00:29:14.442 --rc geninfo_all_blocks=1 00:29:14.442 --rc geninfo_unexecuted_blocks=1 00:29:14.442 00:29:14.442 ' 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.442 06:40:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:14.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:14.442 
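An aside on the "[: : integer expression expected" complaint from nvmf/common.sh line 33, visible just above: the xtrace shows the test that triggers it, '[' '' -eq 1 ']', and bash's test builtin cannot compare an empty string as an integer. The warning is harmless in this run, but the usual guard is to default the value before testing; a minimal sketch with an illustrative variable name, not the script's actual one:

    flag=""
    [ "$flag" -eq 1 ]      # -> bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] # defaulting the empty value to 0 keeps the test quiet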
06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.442 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.443 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.443 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.443 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.443 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.443 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.443 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.443 06:40:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:22.588 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:22.588 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:22.588 Found net devices under 0000:31:00.0: cvl_0_0 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:22.588 Found net devices under 0000:31:00.1: cvl_0_1 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.588 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:22.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:22.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms
00:29:22.589
00:29:22.589 --- 10.0.0.2 ping statistics ---
00:29:22.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:22.589 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:22.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:22.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms
00:29:22.589
00:29:22.589 --- 10.0.0.1 ping statistics ---
00:29:22.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:22.589 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2821361
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2821361
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 2821361 ']'
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:22.589 06:40:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:29:22.589 [2024-11-20 06:40:41.882353] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
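The namespace plumbing traced above is what lets one machine act as both NVMe/TCP target and initiator on physical e810 ports: one port moves into a private network namespace for the target, the other stays in the default namespace for the host. Condensed into a sketch (interface names, addresses, and port exactly as in this run; error handling omitted):

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                            # verify the path before launching the target

The target itself then runs inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced above), which is why the later cleanup flushes the addresses and removes the namespace rather than just killing the process.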
00:29:22.589 [2024-11-20 06:40:41.882419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.589 [2024-11-20 06:40:41.983677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:22.589 [2024-11-20 06:40:42.036200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.589 [2024-11-20 06:40:42.036251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.589 [2024-11-20 06:40:42.036260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.589 [2024-11-20 06:40:42.036267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.589 [2024-11-20 06:40:42.036274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.589 [2024-11-20 06:40:42.038708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.589 [2024-11-20 06:40:42.038878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.589 [2024-11-20 06:40:42.038927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.589 [2024-11-20 06:40:42.038928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.850 06:40:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:22.850 06:40:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:29:22.850 06:40:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:23.111 [2024-11-20 06:40:42.857238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.111 06:40:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:23.111 06:40:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:23.111 06:40:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.111 06:40:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:23.372 Malloc1 00:29:23.372 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:23.633 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:23.894 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.894 [2024-11-20 06:40:43.724215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.894 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:24.155 06:40:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:24.416 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:24.416 fio-3.35 00:29:24.416 Starting 1 thread 00:29:26.958 00:29:26.958 test: (groupid=0, jobs=1): 
err= 0: pid=2822207: Wed Nov 20 06:40:46 2024
00:29:26.958 read: IOPS=13.5k, BW=52.6MiB/s (55.2MB/s)(106MiB/2004msec)
00:29:26.958 slat (usec): min=2, max=256, avg= 2.15, stdev= 2.20
00:29:26.958 clat (usec): min=2861, max=8946, avg=5209.03, stdev=377.81
00:29:26.958 lat (usec): min=2892, max=8949, avg=5211.18, stdev=377.91
00:29:26.958 clat percentiles (usec):
00:29:26.958 | 1.00th=[ 4228], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4948],
00:29:26.958 | 30.00th=[ 5014], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5276],
00:29:26.958 | 70.00th=[ 5407], 80.00th=[ 5473], 90.00th=[ 5669], 95.00th=[ 5735],
00:29:26.958 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 7898], 99.95th=[ 8225],
00:29:26.958 | 99.99th=[ 8848]
00:29:26.958 bw ( KiB/s): min=52640, max=54440, per=99.92%, avg=53870.00, stdev=828.41, samples=4
00:29:26.958 iops : min=13160, max=13610, avg=13467.50, stdev=207.10, samples=4
00:29:26.958 write: IOPS=13.5k, BW=52.6MiB/s (55.2MB/s)(105MiB/2004msec); 0 zone resets
00:29:26.958 slat (usec): min=2, max=245, avg= 2.21, stdev= 1.64
00:29:26.958 clat (usec): min=2626, max=8144, avg=4230.96, stdev=311.84
00:29:26.958 lat (usec): min=2642, max=8147, avg=4233.17, stdev=312.00
00:29:26.958 clat percentiles (usec):
00:29:26.958 | 1.00th=[ 3490], 5.00th=[ 3752], 10.00th=[ 3884], 20.00th=[ 4015],
00:29:26.958 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293],
00:29:26.958 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4686],
00:29:26.958 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 6521], 99.95th=[ 6980],
00:29:26.958 | 99.99th=[ 8029]
00:29:26.958 bw ( KiB/s): min=52944, max=54400, per=100.00%, avg=53868.00, stdev=642.58, samples=4
00:29:26.958 iops : min=13236, max=13600, avg=13467.00, stdev=160.64, samples=4
00:29:26.958 lat (msec) : 4=10.10%, 10=89.90%
00:29:26.958 cpu : usr=74.64%, sys=24.16%, ctx=19, majf=0, minf=17
00:29:26.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:29:26.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:26.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:26.958 issued rwts: total=27010,26986,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:26.958 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:26.958
00:29:26.958 Run status group 0 (all jobs):
00:29:26.958 READ: bw=52.6MiB/s (55.2MB/s), 52.6MiB/s-52.6MiB/s (55.2MB/s-55.2MB/s), io=106MiB (111MB), run=2004-2004msec
00:29:26.958 WRITE: bw=52.6MiB/s (55.2MB/s), 52.6MiB/s-52.6MiB/s (55.2MB/s-55.2MB/s), io=105MiB (111MB), run=2004-2004msec
00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:29:26.958
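Both fio jobs in this test run through SPDK's fio plugin rather than the kernel NVMe initiator: fio_plugin LD_PRELOADs the spdk_nvme ioengine and passes the connection parameters inside the --filename string, as the trace around this point shows. Reduced to its essentials (paths as in this workspace):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    LD_PRELOAD=$plugin /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

ioengine=spdk in the job file (visible in the fio banner above) routes I/O through the preloaded library, so the whole NVMe/TCP host stack under test lives in user space.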
06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:26.958 06:40:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:27.533 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:27.533 fio-3.35 00:29:27.533 Starting 1 thread 00:29:29.581 00:29:29.581 test: (groupid=0, jobs=1): err= 0: pid=2822718: Wed Nov 20 06:40:49 2024 00:29:29.581 read: IOPS=9638, BW=151MiB/s (158MB/s)(302MiB/2008msec) 00:29:29.581 slat (usec): min=3, max=113, avg= 3.60, stdev= 1.56 00:29:29.581 clat (usec): min=1214, max=15806, avg=8018.05, stdev=1818.46 00:29:29.581 lat (usec): min=1218, max=15809, avg=8021.65, stdev=1818.59 00:29:29.581 clat percentiles (usec): 00:29:29.581 | 1.00th=[ 4228], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6325], 00:29:29.581 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8455], 00:29:29.581 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10814], 00:29:29.581 | 99.00th=[11994], 99.50th=[12518], 99.90th=[14615], 99.95th=[15008], 00:29:29.581 | 99.99th=[15795] 00:29:29.581 bw ( KiB/s): min=71168, max=84576, per=49.99%, avg=77096.00, stdev=5549.66, samples=4 00:29:29.581 iops : min= 4448, max= 5286, avg=4818.50, stdev=346.85, samples=4 00:29:29.581 write: IOPS=5672, BW=88.6MiB/s (92.9MB/s)(157MiB/1776msec); 0 zone resets 00:29:29.581 slat (usec): min=39, max=359, 
avg=40.82, stdev= 6.82
00:29:29.581 clat (usec): min=1900, max=15225, avg=9149.15, stdev=1305.77
00:29:29.581 lat (usec): min=1940, max=15265, avg=9189.97, stdev=1307.23
00:29:29.581 clat percentiles (usec):
00:29:29.581 | 1.00th=[ 6456], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[ 8029],
00:29:29.581 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503],
00:29:29.581 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10814], 95.00th=[11207],
00:29:29.581 | 99.00th=[12256], 99.50th=[13304], 99.90th=[14484], 99.95th=[15008],
00:29:29.581 | 99.99th=[15139]
00:29:29.581 bw ( KiB/s): min=73184, max=88160, per=88.49%, avg=80320.00, stdev=6127.48, samples=4
00:29:29.581 iops : min= 4574, max= 5510, avg=5020.00, stdev=382.97, samples=4
00:29:29.581 lat (msec) : 2=0.02%, 4=0.44%, 10=80.46%, 20=19.09%
00:29:29.581 cpu : usr=85.95%, sys=12.95%, ctx=15, majf=0, minf=35
00:29:29.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:29:29.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:29.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:29:29.581 issued rwts: total=19355,10075,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:29.581 latency : target=0, window=0, percentile=100.00%, depth=128
00:29:29.581
00:29:29.581 Run status group 0 (all jobs):
00:29:29.581 READ: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=302MiB (317MB), run=2008-2008msec
00:29:29.581 WRITE: bw=88.6MiB/s (92.9MB/s), 88.6MiB/s-88.6MiB/s (92.9MB/s-92.9MB/s), io=157MiB (165MB), run=1776-1776msec
00:29:29.842 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:29.842 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:29:29.842 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:29:29.842 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:29:29.842 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:29:29.842 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:29.842 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:29:29.842 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:29.842 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:29:29.842 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:29.842 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:29.842 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2821361 ']'
00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2821361
00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 2821361 ']'
00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0
2821361 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2821361 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2821361' 00:29:30.103 killing process with pid 2821361 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 2821361 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 2821361 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:30.103 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:30.104 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:30.104 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.104 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.104 06:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:32.650 00:29:32.650 real 0m18.092s 00:29:32.650 user 1m1.487s 00:29:32.650 sys 0m7.687s 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.650 ************************************ 00:29:32.650 END TEST nvmf_fio_host 00:29:32.650 ************************************ 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.650 ************************************ 00:29:32.650 START TEST nvmf_failover 00:29:32.650 ************************************ 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:32.650 * Looking for test storage... 00:29:32.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:32.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.650 --rc genhtml_branch_coverage=1 00:29:32.650 --rc genhtml_function_coverage=1 00:29:32.650 --rc genhtml_legend=1 00:29:32.650 --rc geninfo_all_blocks=1 00:29:32.650 --rc geninfo_unexecuted_blocks=1 00:29:32.650 00:29:32.650 ' 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:32.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.650 --rc genhtml_branch_coverage=1 00:29:32.650 --rc genhtml_function_coverage=1 00:29:32.650 --rc genhtml_legend=1 00:29:32.650 --rc geninfo_all_blocks=1 00:29:32.650 --rc geninfo_unexecuted_blocks=1 00:29:32.650 00:29:32.650 ' 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:32.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.650 --rc genhtml_branch_coverage=1 00:29:32.650 --rc genhtml_function_coverage=1 00:29:32.650 --rc genhtml_legend=1 00:29:32.650 --rc geninfo_all_blocks=1 00:29:32.650 --rc geninfo_unexecuted_blocks=1 00:29:32.650 00:29:32.650 ' 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:32.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.650 --rc genhtml_branch_coverage=1 00:29:32.650 --rc genhtml_function_coverage=1 00:29:32.650 --rc genhtml_legend=1 00:29:32.650 --rc geninfo_all_blocks=1 00:29:32.650 --rc geninfo_unexecuted_blocks=1 00:29:32.650 00:29:32.650 ' 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.650 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:32.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
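The "[: : integer expression expected" complaint above is emitted when nvmf/common.sh line 33 runs '[' '' -eq 1 ']': bash's test builtin cannot compare an empty string numerically, so the test fails noisily instead of cleanly. A minimal bash reproduction, with "flag" as a stand-in name for whatever variable is unset in that run:

    flag=""
    [ "$flag" -eq 1 ] && echo enabled       # prints: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo enabled  # defaulting empty to 0 keeps the test quiet

The harness tolerates the message because the test's exit status is still non-zero, so the guarded branch is simply skipped.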
00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:29:32.651 06:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:40.796 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:40.796 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:40.796 Found net devices under 0000:31:00.0: cvl_0_0 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:40.796 Found net devices under 0000:31:00.1: cvl_0_1 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.796 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:40.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:29:40.797 00:29:40.797 --- 10.0.0.2 ping statistics --- 00:29:40.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.797 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:40.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:29:40.797 00:29:40.797 --- 10.0.0.1 ping statistics --- 00:29:40.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.797 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.797 06:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:40.797 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2827428 00:29:40.797 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2827428 00:29:40.797 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:40.797 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2827428 ']' 00:29:40.797 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.797 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:40.797 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.797 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:40.797 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:40.797 [2024-11-20 06:41:00.074822] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:29:40.797 [2024-11-20 06:41:00.074894] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.797 [2024-11-20 06:41:00.177117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:40.797 [2024-11-20 06:41:00.230072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:40.797 [2024-11-20 06:41:00.230125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.797 [2024-11-20 06:41:00.230133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.797 [2024-11-20 06:41:00.230140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.797 [2024-11-20 06:41:00.230146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:40.797 [2024-11-20 06:41:00.232009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.797 [2024-11-20 06:41:00.232171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.797 [2024-11-20 06:41:00.232171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.059 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:41.059 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:29:41.059 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.059 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:41.059 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:41.059 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.059 06:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:41.320 [2024-11-20 06:41:01.091590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.320 06:41:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:41.581 Malloc0 00:29:41.581 06:41:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:41.842 06:41:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:41.842 06:41:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.103 [2024-11-20 06:41:01.896699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.103 06:41:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:42.365 [2024-11-20 06:41:02.101344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:42.365 06:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:42.626 [2024-11-20 06:41:02.305963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 ***
00:29:42.626 06:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2828017
00:29:42.626 06:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:29:42.626 06:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:42.626 06:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2828017 /var/tmp/bdevperf.sock
00:29:42.626 06:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2828017 ']'
00:29:42.626 06:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:42.626 06:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:42.626 06:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:42.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:42.626 06:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:42.626 06:41:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:29:43.567 06:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:43.567 06:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:29:43.567 06:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:43.827 NVMe0n1
00:29:43.827 06:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:44.088
00:29:44.088 06:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2828251
00:29:44.088 06:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:29:44.088 06:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:45.030 06:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:45.291 [2024-11-20 06:41:05.000346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e46d0 is same with the state(6) to be set
00:29:45.291 06:41:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:29:48.592 06:41:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:29:48.592
00:29:48.592 06:41:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:48.853 [2024-11-20 06:41:08.569684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e5520 is same with the state(6) to be set
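Each nvmf_subsystem_remove_listener above yanks the portal that bdevperf's NVMe0 controller is currently using; going by the message text, the burst of identical tcp.c:1773 lines that follows each removal is the target re-setting an already-set qpair receive state while it tears down the dropped connection, after which bdevperf continues I/O on the surviving path. The round-trip just performed, reduced to the two rpc.py calls this run actually issued:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # give NVMe0 a fresh portal (4422) to fail over to
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # then drop the portal the active path is on (4421) to force the failover
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421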
00:29:48.853 06:41:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:29:52.166 06:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:52.166 [2024-11-20 06:41:11.761678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:52.166 06:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:29:53.110 06:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:29:53.110 [2024-11-20 06:41:12.948508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set
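With the 4420 listener re-added and 4422 removed, NVMe0 is forced back onto the portal it started on, completing the rotation. Not exercised in this log, but one way to confirm which paths NVMe0 holds at any point is SPDK's standard controller-listing RPC against the same bdevperf socket (its use here is an illustration, not part of this run):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # lists NVMe0's attached controller(s) with their transport address/port
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0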
recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set 00:29:53.110 [2024-11-20 06:41:12.948679] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set
[... the identical tcp.c:1773 recv-state error line repeated verbatim, timestamps 2024-11-20 06:41:12.948683 through 06:41:12.949128; only the microsecond timestamp changes between repeats ...]
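(A repeated burst like the one above is easiest to quantify straight from the raw console capture rather than by eye. A minimal sketch, assuming the console was saved to a file named console.log, which is a hypothetical name and not an artifact of this job:)

    $ grep -o 'tcp.c:1773:nvmf_tcp_qpair_set_recv_state: \*ERROR\*: The recv state of tqpair=0x22307a0 is same with the state(6) to be set' console.log | wc -l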
00:29:53.112 06:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2828251
00:29:59.707 {
00:29:59.707   "results": [
00:29:59.707     {
00:29:59.707       "job": "NVMe0n1",
00:29:59.707       "core_mask": "0x1",
00:29:59.707       "workload": "verify",
00:29:59.707       "status": "finished",
00:29:59.707       "verify_range": {
00:29:59.707         "start": 0,
00:29:59.707         "length": 16384
00:29:59.707       },
00:29:59.707       "queue_depth": 128,
00:29:59.707       "io_size": 4096,
00:29:59.707       "runtime": 15.004591,
00:29:59.707       "iops": 12492.243207428979,
00:29:59.707       "mibps": 48.79782502901945,
00:29:59.707       "io_failed": 6949,
00:29:59.707       "io_timeout": 0,
00:29:59.707       "avg_latency_us": 9859.26782900355,
00:29:59.707       "min_latency_us": 532.48,
00:29:59.707       "max_latency_us": 33641.81333333333
00:29:59.707     }
00:29:59.707   ],
00:29:59.707   "core_count": 1
00:29:59.707 }
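(A quick cross-check of the bdevperf result block above; this assumes "mibps" is simply iops * io_size / 2^20 and that "iops" is the completed-I/O average over the reported runtime. The bc invocations are illustrative and not part of the captured run:)

    $ echo '12492.243207428979 * 4096 / 1048576' | bc -l    # -> 48.7978..., matches the reported "mibps"
    $ echo '12492.243207428979 * 15.004591' | bc -l         # -> ~187441 I/Os completed during the 15 s window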
00:29:59.707 06:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2828017
00:29:59.707 06:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2828017 ']'
00:29:59.707 06:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2828017
00:29:59.707 06:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:29:59.707 06:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:59.707 06:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2828017
00:29:59.707 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:59.707 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:29:59.707 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2828017'
killing process with pid 2828017
00:29:59.707 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2828017
00:29:59.707 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2828017
00:29:59.707 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:59.707 [2024-11-20 06:41:02.387408] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:29:59.707 [2024-11-20 06:41:02.387470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828017 ]
00:29:59.707 [2024-11-20 06:41:02.476473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:59.707 [2024-11-20 06:41:02.512118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:59.707 Running I/O for 15 seconds...
00:29:59.707 11106.00 IOPS, 43.38 MiB/s [2024-11-20T05:41:19.627Z]
00:29:59.707 [2024-11-20 06:41:05.001006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:59.707 [2024-11-20 06:41:05.001039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeated for every outstanding I/O on the dying qpair: READ lba 95128 through 96048 (len:8, SGL TRANSPORT DATA BLOCK, cid varies), then WRITE lba 96056 through 96128 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each one completed ABORTED - SQ DELETION (00/08) ...]
00:29:59.710 [2024-11-20 06:41:05.003185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d28c0 is same with the state(6) to be set
00:29:59.710 [2024-11-20 06:41:05.003194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:59.710 [2024-11-20 06:41:05.003200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:59.710 [2024-11-20 06:41:05.003207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96136 len:8 PRP1 0x0 PRP2 0x0
00:29:59.710 [2024-11-20 06:41:05.003215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.710 [2024-11-20 06:41:05.003260] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:29:59.710 [2024-11-20 06:41:05.003283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:59.710 [2024-11-20 06:41:05.003291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort repeated for qid:0 cid:1, cid:2 and cid:3 ...]
00:29:59.710 [2024-11-20 06:41:05.003346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:59.710 [2024-11-20 06:41:05.006957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:59.710 [2024-11-20 06:41:05.006980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b1fc0 (9): Bad file descriptor
00:29:59.710 [2024-11-20 06:41:05.035478] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
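(For orientation: this is the expected shape of a pass through test/nvmf/host/failover.sh, where the subsystem listens on both 10.0.0.2:4420 and 10.0.0.2:4421, bdevperf attaches both trids, and the active listener is dropped mid-run to force the failover logged above. A minimal sketch of that setup, with -q/-o/-w/-t taken from the result block; the exact rpc.py and bdevperf options follow current SPDK usage and are assumptions, not a transcript of this job:)

    # target side: one subsystem, two TCP listeners so a second path exists
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # initiator side: bdevperf with the knobs the result block reports (q=128, o=4096, w=verify, t=15)
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 &
    # attach both trids under the same controller name so bdev_nvme holds 10.0.0.2:4421 as the failover trid
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # removing the active listener is what triggers "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421"
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420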
00:29:59.710 11064.00 IOPS, 43.22 MiB/s [2024-11-20T05:41:19.631Z] 11087.33 IOPS, 43.31 MiB/s [2024-11-20T05:41:19.631Z] 11428.00 IOPS, 44.64 MiB/s [2024-11-20T05:41:19.631Z]
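The throughput column is simply the IOPS column times the I/O size: every data command in these runs is len:8 blocks and its SGL descriptor covers len:0x1000 bytes, i.e. 4 KiB per I/O, so MiB/s = IOPS x 4096 / 2^20 = IOPS / 256. A quick sanity check against the checkpoints above:

```c
/* Sanity check: the MiB/s values follow from the IOPS values at
 * 4 KiB (len:8 x 512 B blocks, SGL len:0x1000) per I/O. */
#include <stdio.h>

int main(void)
{
    const double iops[] = { 11064.00, 11087.33, 11428.00 };

    for (int i = 0; i < 3; i++)
        printf("%.2f IOPS -> %.2f MiB/s\n",
               iops[i], iops[i] * 4096.0 / (1024.0 * 1024.0));
    /* prints 43.22, 43.31, 44.64 -- matching the log */
    return 0;
}
```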
00:29:59.711 [2024-11-20 06:41:08.570681] [... repeated NOTICE pairs: nvme_qpair.c: 243:nvme_io_qpair_print_command READ sqid:1 nsid:1 lba:46968-47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:47232-47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 (cid varies), each completed by nvme_qpair.c: 474:spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:29:59.713 [2024-11-20 06:41:08.571714] [... repeated triples: nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs *ERROR* aborting queued i/o, nvme_qpair.c: 558:nvme_qpair_manual_complete_request *NOTICE* Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:47712-47984 and READ sqid:1 cid:0 nsid:1 lba:47168-47224 len:8 PRP1 0x0 PRP2 0x0, each ABORTED - SQ DELETION (00/08) qid:1 ...]
00:29:59.715 [2024-11-20 06:41:08.582873] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:29:59.715 [2024-11-20 06:41:08.582897] [... four repeated NOTICE pairs: nvme_qpair.c: 223:nvme_admin_qpair_print_command ASYNC EVENT REQUEST (0c) qid:0 cid:3-0, each ABORTED - SQ DELETION (00/08) qid:0 ...]
00:29:59.715 [2024-11-20 06:41:08.582944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:29:59.715 [2024-11-20 06:41:08.582975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b1fc0 (9): Bad file descriptor
00:29:59.715 [2024-11-20 06:41:08.585426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:29:59.715 [2024-11-20 06:41:08.621707] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:29:59.715 11562.60 IOPS, 45.17 MiB/s [2024-11-20T05:41:19.635Z]
00:29:59.715 11782.83 IOPS, 46.03 MiB/s [2024-11-20T05:41:19.635Z]
00:29:59.715 11933.00 IOPS, 46.61 MiB/s [2024-11-20T05:41:19.635Z]
00:29:59.715 12047.25 IOPS, 47.06 MiB/s [2024-11-20T05:41:19.635Z]
00:29:59.715 [2024-11-20 06:41:12.951328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:59.715 [2024-11-20 06:41:12.951358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print-command / SQ DELETION pair repeats for READ lba:113864 - 113928 (step 8, varying cid) (06:41:12.951372 - 06:41:12.951475) ...]
00:29:59.715 [2024-11-20 06:41:12.951482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:59.715 [2024-11-20 06:41:12.951488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for WRITE lba:113944 - 114432 (step 8, varying cid) (06:41:12.951494 - 06:41:12.952209) ...]
00:29:59.717 [2024-11-20 06:41:12.952226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:59.717 [2024-11-20 06:41:12.952231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114440 len:8 PRP1 0x0 PRP2 0x0
00:29:59.717 [2024-11-20 06:41:12.952237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:59.717 [2024-11-20 06:41:12.952270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:59.717 [2024-11-20 06:41:12.952278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort pair repeats for qid:0 cid:1, cid:2 and cid:3 (06:41:12.952284 - 06:41:12.952309) ...]
00:29:59.717 [2024-11-20 06:41:12.952314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1fc0 is same with the state(6) to be set
00:29:59.717 [2024-11-20 06:41:12.953578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:59.717 [2024-11-20 06:41:12.953586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:59.717 [2024-11-20 06:41:12.953591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114448 len:8 PRP1 0x0 PRP2 0x0
00:29:59.717 [2024-11-20 06:41:12.953596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same four-line abort cycle repeats for WRITE lba:114456 - 114872 (step 8) (06:41:12.953602 - 06:41:12.966316) ...]
[... it then repeats for the requeued READ lba:113856 - 113928 (step 8) (06:41:12.966322 - 06:41:12.966559) ...]
[... and for WRITE lba:113936 - 114000 (step 8) (06:41:12.966566 - 06:41:12.966772); the final completion entry follows ...]
00:29:59.720 [2024-11-20 06:41:12.966779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.720 [2024-11-20 06:41:12.966787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.720 [2024-11-20 06:41:12.966792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.720 [2024-11-20 06:41:12.966797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114008 len:8 PRP1 0x0 PRP2 0x0 00:29:59.720 [2024-11-20 06:41:12.966804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.720 [2024-11-20 06:41:12.966811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.720 [2024-11-20 06:41:12.966816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.720 [2024-11-20 06:41:12.966821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114016 len:8 PRP1 0x0 PRP2 0x0 00:29:59.720 [2024-11-20 06:41:12.966828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.720 [2024-11-20 06:41:12.966836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.720 [2024-11-20 06:41:12.966841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.720 [2024-11-20 06:41:12.966846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114024 len:8 PRP1 0x0 PRP2 0x0 00:29:59.720 [2024-11-20 06:41:12.966853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.720 [2024-11-20 06:41:12.966860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.720 [2024-11-20 06:41:12.966865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.720 [2024-11-20 06:41:12.966871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114032 len:8 PRP1 0x0 PRP2 0x0 00:29:59.720 [2024-11-20 06:41:12.966877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.720 [2024-11-20 06:41:12.966884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.720 [2024-11-20 06:41:12.966889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.720 [2024-11-20 06:41:12.966898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114040 len:8 PRP1 0x0 PRP2 0x0 00:29:59.720 [2024-11-20 06:41:12.966905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.720 [2024-11-20 06:41:12.966911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.720 [2024-11-20 06:41:12.966916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.720 [2024-11-20 06:41:12.966922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114048 len:8 PRP1 0x0 PRP2 0x0 00:29:59.720 [2024-11-20 06:41:12.966928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.720 
[2024-11-20 06:41:12.966935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.720 [2024-11-20 06:41:12.966940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.720 [2024-11-20 06:41:12.966946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114056 len:8 PRP1 0x0 PRP2 0x0 00:29:59.720 [2024-11-20 06:41:12.966953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.720 [2024-11-20 06:41:12.966959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.720 [2024-11-20 06:41:12.966964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.966970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114064 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.966977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.966984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.966988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.966994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114072 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114080 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114088 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114096 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967081] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114104 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114112 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114120 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114128 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114136 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114144 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114152 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114160 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.967274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.967279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.967284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114168 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.967291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.974798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.974822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.974831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114176 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.974840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.974848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.974854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.974859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114184 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.974866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.974873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.974878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.974884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114192 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.974890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.974897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 
06:41:12.974902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.974908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114200 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.974914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.974921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.974926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.974932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114208 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.974938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.974945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.974950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.974956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114216 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.974963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.974970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.974974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.974984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114224 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.974991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.974998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.975003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.975008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114232 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.975015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.975022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.975027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.975032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114240 len:8 PRP1 0x0 PRP2 0x0 00:29:59.721 [2024-11-20 06:41:12.975039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.721 [2024-11-20 06:41:12.975046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.721 [2024-11-20 06:41:12.975051] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.721 [2024-11-20 06:41:12.975056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114248 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114256 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114264 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114272 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114280 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114288 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114296 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114304 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114312 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114320 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114328 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114336 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 
[2024-11-20 06:41:12.975345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114344 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114352 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114360 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114368 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114376 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114384 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114392 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114400 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114408 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114416 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114424 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.722 [2024-11-20 06:41:12.975618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.722 [2024-11-20 06:41:12.975624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.722 [2024-11-20 06:41:12.975633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114432 len:8 PRP1 0x0 PRP2 0x0 00:29:59.722 [2024-11-20 06:41:12.975642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.723 [2024-11-20 06:41:12.975651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:59.723 [2024-11-20 06:41:12.975658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:59.723 [2024-11-20 06:41:12.975665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
00:29:59.723 [2024-11-20 06:41:12.975723] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:29:59.723 [2024-11-20 06:41:12.975736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:29:59.723 12139.78 IOPS, 47.42 MiB/s [2024-11-20T05:41:19.643Z] [2024-11-20 06:41:12.975804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b1fc0 (9): Bad file descriptor
00:29:59.723 [2024-11-20 06:41:12.980282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:29:59.723 [2024-11-20 06:41:13.049076] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:29:59.723 12117.80 IOPS, 47.34 MiB/s [2024-11-20T05:41:19.643Z] 12218.82 IOPS, 47.73 MiB/s [2024-11-20T05:41:19.643Z] 12296.25 IOPS, 48.03 MiB/s [2024-11-20T05:41:19.643Z] 12380.46 IOPS, 48.36 MiB/s [2024-11-20T05:41:19.643Z] 12433.43 IOPS, 48.57 MiB/s
00:29:59.723 Latency(us)
00:29:59.723 [2024-11-20T05:41:19.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:59.723 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:59.723 Verification LBA range: start 0x0 length 0x4000
00:29:59.723 NVMe0n1 : 15.00 12492.24 48.80 463.12 0.00 9859.27 532.48 33641.81
00:29:59.723 [2024-11-20T05:41:19.643Z] ===================================================================================================================
00:29:59.723 [2024-11-20T05:41:19.643Z] Total : 12492.24 48.80 463.12 0.00 9859.27 532.48 33641.81
00:29:59.723 Received shutdown signal, test time was about 15.000000 seconds
00:29:59.723
00:29:59.723 Latency(us)
00:29:59.723 [2024-11-20T05:41:19.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:59.723 [2024-11-20T05:41:19.643Z] ===================================================================================================================
00:29:59.723 [2024-11-20T05:41:19.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:59.723 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:59.723 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:29:59.723 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
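The three trace entries above are the pass gate for the failover phase: failover.sh counts 'Resetting controller successful' notices in the captured bdevperf output and requires exactly 3, one per triggered failover. A minimal standalone sketch of the same check, assuming the captured log sits at the try.txt path this trace uses later (path and expected count are taken from this run; adjust both elsewhere):

    # count successful controller resets in the captured bdevperf log
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi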
00:29:59.723 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2831137
00:29:59.723 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2831137 /var/tmp/bdevperf.sock
00:29:59.723 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:59.723 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2831137 ']'
00:29:59.723 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:59.723 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:59.723 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:59.723 06:41:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:00.293 06:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:30:00.293 06:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:30:00.293 06:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:00.293 [2024-11-20 06:41:20.175980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:00.293 06:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:00.554 [2024-11-20 06:41:20.360408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:30:00.555 06:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:30:00.815 NVMe0n1
00:30:00.815 06:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:30:01.076
00:30:01.076 06:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:30:01.338
00:30:01.338 06:41:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 06:41:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:30:01.599 06:41:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:01.859 06:41:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:30:05.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 06:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:30:05.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2832300
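Condensed from the trace above, this is the whole multipath arrangement the test exercises: the target publishes two extra TCP portals for the subsystem, bdevperf registers all three paths under one controller name with -x failover, and the primary path is then detached to force a failover. A sketch of the same sequence with the addresses and NQN from this run (rpc.py is shortened to its basename here; the trace invokes it by full path, and the for loop is our restructuring of the three traced attach calls):

    # target side: listen on two more portals for the same subsystem
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # initiator side: register all three paths on one controller inside bdevperf
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # drop the active path; the bdev layer should fail over to a surviving one
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1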
00:30:05.159 06:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 06:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2832300
00:30:06.100 {
00:30:06.100 "results": [
00:30:06.100 {
00:30:06.100 "job": "NVMe0n1",
00:30:06.100 "core_mask": "0x1",
00:30:06.100 "workload": "verify",
00:30:06.100 "status": "finished",
00:30:06.100 "verify_range": {
00:30:06.100 "start": 0,
00:30:06.100 "length": 16384
00:30:06.100 },
00:30:06.100 "queue_depth": 128,
00:30:06.100 "io_size": 4096,
00:30:06.100 "runtime": 1.010428,
00:30:06.100 "iops": 12810.4130130994,
00:30:06.100 "mibps": 50.04067583241953,
00:30:06.100 "io_failed": 0,
00:30:06.100 "io_timeout": 0,
00:30:06.100 "avg_latency_us": 9957.036143386898,
00:30:06.100 "min_latency_us": 2157.2266666666665,
00:30:06.100 "max_latency_us": 12943.36
00:30:06.100 }
00:30:06.100 ],
00:30:06.100 "core_count": 1
00:30:06.100 }
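perform_tests prints the JSON document above when the run finishes. A hedged sketch of pulling the headline numbers out of a saved copy; jq is our tooling choice for illustration, not something the harness itself uses, and the field names are taken from the output above:

    # capture the results JSON, then extract the per-job summary
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests > results.json
    # prints e.g. "NVMe0n1: 12810 IOPS, avg 9957 us, 0 failed"
    jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, avg \(.avg_latency_us | floor) us, \(.io_failed) failed"' results.json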
00:30:06.100 06:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:06.100 [2024-11-20 06:41:19.219181] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:30:06.100 [2024-11-20 06:41:19.219240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831137 ]
00:30:06.101 [2024-11-20 06:41:19.303882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:06.101 [2024-11-20 06:41:19.332078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:06.101 [2024-11-20 06:41:21.546176] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:06.101 [2024-11-20 06:41:21.546215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:06.101 [2024-11-20 06:41:21.546224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:06.101 [2024-11-20 06:41:21.546231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:06.101 [2024-11-20 06:41:21.546236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:06.101 [2024-11-20 06:41:21.546242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:06.101 [2024-11-20 06:41:21.546248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:06.101 [2024-11-20 06:41:21.546253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:06.101 [2024-11-20 06:41:21.546258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:06.101 [2024-11-20 06:41:21.546263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:30:06.101 [2024-11-20 06:41:21.546284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:30:06.101 [2024-11-20 06:41:21.546294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2bfc0 (9): Bad file descriptor
00:30:06.101 [2024-11-20 06:41:21.557039] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:30:06.101 Running I/O for 1 seconds...
00:30:06.101 12816.00 IOPS, 50.06 MiB/s
00:30:06.101 Latency(us)
00:30:06.101 [2024-11-20T05:41:26.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:06.101 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:06.101 Verification LBA range: start 0x0 length 0x4000
00:30:06.101 NVMe0n1 : 1.01 12810.41 50.04 0.00 0.00 9957.04 2157.23 12943.36
00:30:06.101 [2024-11-20T05:41:26.021Z] ===================================================================================================================
00:30:06.101 [2024-11-20T05:41:26.021Z] Total : 12810.41 50.04 0.00 0.00 9957.04 2157.23 12943.36
00:30:06.101 06:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 06:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:30:06.361 06:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:06.361 06:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 06:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:30:06.621 06:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:06.882 06:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
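The teardown above mirrors the setup: the remaining secondary paths are detached one at a time, with a bdev_nvme_get_controllers | grep -q check after each step (whether NVMe0 should still appear depends on how many paths are left at that point). A condensed sketch of one such step, with the same socket and NQN as this run:

    # detach the path that was failed over from, then inspect the controller list
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0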
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2831137
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2831137 ']'
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2831137
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2831137
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2831137' killing process with pid 2831137
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2831137
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2831137
00:30:10.179 06:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:30:10.179 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:10.439 rmmod nvme_tcp
00:30:10.439 rmmod nvme_fabrics
00:30:10.439 rmmod nvme_keyring
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2827428 ']'
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2827428
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2827428 ']'
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2827428
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2827428
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2827428' killing process with pid 2827428
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2827428
00:30:10.439 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2827428
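killprocess, traced twice above (once for bdevperf, once for the nvmf target app), is the harness's guarded kill: bail out on an empty PID, check the process is still alive, skip sudo wrappers, then kill and reap. A simplified sketch of the pattern; this is our reconstruction from the trace, and the real helper in autotest_common.sh carries more branches:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1             # no PID given
        kill -0 "$pid" || return 0            # already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" != "sudo" ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" 2>/dev/null           # reap only works if it is our child
        fi
    }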
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:10.700 06:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:13.243
00:30:13.243 real 0m40.426s
00:30:13.243 user 2m3.480s
00:30:13.243 sys 0m8.996s
00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:13.243 ************************************
00:30:13.243 END TEST nvmf_failover
00:30:13.243 ************************************
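The iptr step inside nvmftestfini above clears SPDK's firewall additions by round-tripping the ruleset: dump it, drop every rule tagged SPDK_NVMF, and load the remainder back. As traced:

    # flush only SPDK-tagged rules, leaving the rest of the firewall intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore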
00:30:13.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:30:13.243 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:13.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.244 --rc genhtml_branch_coverage=1 00:30:13.244 --rc genhtml_function_coverage=1 00:30:13.244 --rc genhtml_legend=1 00:30:13.244 --rc geninfo_all_blocks=1 00:30:13.244 --rc geninfo_unexecuted_blocks=1 00:30:13.244 00:30:13.244 ' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:13.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.244 --rc genhtml_branch_coverage=1 00:30:13.244 --rc genhtml_function_coverage=1 00:30:13.244 --rc genhtml_legend=1 00:30:13.244 --rc geninfo_all_blocks=1 00:30:13.244 --rc geninfo_unexecuted_blocks=1 00:30:13.244 00:30:13.244 ' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:13.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.244 --rc genhtml_branch_coverage=1 00:30:13.244 --rc genhtml_function_coverage=1 00:30:13.244 --rc genhtml_legend=1 00:30:13.244 --rc geninfo_all_blocks=1 00:30:13.244 --rc geninfo_unexecuted_blocks=1 00:30:13.244 00:30:13.244 ' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:13.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.244 --rc genhtml_branch_coverage=1 00:30:13.244 --rc genhtml_function_coverage=1 00:30:13.244 --rc genhtml_legend=1 00:30:13.244 --rc geninfo_all_blocks=1 00:30:13.244 --rc geninfo_unexecuted_blocks=1 00:30:13.244 00:30:13.244 ' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:13.244 06:41:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:13.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:30:13.244 06:41:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:21.380 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:21.380 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.380 06:41:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:21.380 Found net devices under 0000:31:00.0: cvl_0_0 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:21.380 Found net devices under 0000:31:00.1: cvl_0_1 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:21.380 
06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:21.380 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:21.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:21.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:30:21.380 00:30:21.380 --- 10.0.0.2 ping statistics --- 00:30:21.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.380 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:21.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:21.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:30:21.381 00:30:21.381 --- 10.0.0.1 ping statistics --- 00:30:21.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.381 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2837530 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2837530 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2837530 ']' 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:21.381 06:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.381 [2024-11-20 06:41:40.571819] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
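Everything from prepare_net_devs down to the two pings is the standard nvmf_tcp_init plumbing: both e810 ports sit in one box, so the target port (cvl_0_0) is moved into its own network namespace and the initiator reaches it over real hardware. A condensed sketch built from the commands traced above (interface names, IPs, and masks are the ones this run detected; the full helper is in test/nvmf/common.sh):

    # Two-endpoint TCP test bed across a network namespace.
    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up                              # bring both ends (and target lo) up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                  # tagged so teardown can grep it back out
    ping -c 1 10.0.0.2                                  # root ns -> target ns (0.632 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse path
    modprobe nvme-tcp                                   # kernel initiator for later connects
    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &      # target app on core 1, all tracepoint groups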
00:30:21.381 [2024-11-20 06:41:40.571887] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.381 [2024-11-20 06:41:40.672072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.381 [2024-11-20 06:41:40.722961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.381 [2024-11-20 06:41:40.723011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.381 [2024-11-20 06:41:40.723019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.381 [2024-11-20 06:41:40.723026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.381 [2024-11-20 06:41:40.723032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.381 [2024-11-20 06:41:40.723832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.642 [2024-11-20 06:41:41.435385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.642 [2024-11-20 06:41:41.447643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.642 null0 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.642 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.643 null1 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2837860 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2837860 /tmp/host.sock 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2837860 ']' 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:21.643 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:21.643 06:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:21.643 [2024-11-20 06:41:41.543126] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
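At this point two SPDK apps are running: the target on the default RPC socket (core 1, inside the namespace) and a second nvmf_tgt on /tmp/host.sock acting as the discovery host (core 0). The RPC configuration above, gathered into one sketch with the arguments exactly as issued (the null bdevs are 1000 blocks of 512 bytes; -u 8192 sets the TCP I/O unit size):

    # Target side, via the default /var/tmp/spdk.sock:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009   # discovery service on 8009
    scripts/rpc.py bdev_null_create null0 1000 512      # backing devices for the namespaces
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine
    # Host side: a second app with its own RPC socket.
    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

The host app is then driven with rpc_cmd -s /tmp/host.sock, starting with bdev_nvme_start_discovery against 10.0.0.2:8009 as the next trace lines show.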
00:30:21.643 [2024-11-20 06:41:41.543187] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837860 ] 00:30:21.903 [2024-11-20 06:41:41.636496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.903 [2024-11-20 06:41:41.690172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:22.476 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:22.739 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.001 [2024-11-20 06:41:42.730978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:23.001 06:41:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:23.001 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:23.002 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:30:23.002 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:23.002 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:23.002 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.002 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:23.002 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:23.002 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:23.002 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.262 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:30:23.262 06:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:30:23.522 [2024-11-20 06:41:43.434735] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:23.522 [2024-11-20 06:41:43.434759] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:23.522 [2024-11-20 06:41:43.434773] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:23.782 [2024-11-20 06:41:43.563187] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:23.782 [2024-11-20 06:41:43.622871] 
bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:30:23.782 [2024-11-20 06:41:43.623848] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15c98c0:1 started. 00:30:23.782 [2024-11-20 06:41:43.625462] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:23.782 [2024-11-20 06:41:43.625480] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:23.782 [2024-11-20 06:41:43.632848] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15c98c0 was disconnected and freed. delete nvme_qpair. 00:30:24.354 06:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:24.354 06:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:24.354 06:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:30:24.354 06:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:24.354 06:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:24.354 06:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.354 06:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:24.354 06:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.354 06:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:24.354 06:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.354 06:41:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
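Every assertion in this test runs through the same small helper set: an RPC query piped through jq, normalized with sort and xargs, wrapped in a bounded retry. A reconstruction matching the calls traced above (the real definitions live in host/discovery.sh and common/autotest_common.sh; this mirrors their observable shape rather than quoting them):

    get_subsystem_names() {     # controller names known to the host app
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {           # bdevs the discovery service has attached
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {     # listening trsvcids for one controller, e.g. "4420"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    waitforcondition() {        # re-evaluate a condition up to 10 times, 1 s apart
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

The '' == '' and nvme0 comparisons above are these helpers racing the asynchronous discovery attach, which is why get_subsystem_names first returns empty and matches one sleep later.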
00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.354 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:24.616 [2024-11-20 06:41:44.431636] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15c9aa0:1 started. 00:30:24.616 [2024-11-20 06:41:44.435155] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15c9aa0 was disconnected and freed. delete nvme_qpair. 
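The discovery.sh@55, @59 and @63 fragments above show the three query helpers the test keeps polling. Sketches of what they likely look like, reconstructed from the traced pipelines; rpc_cmd is assumed to be the harness wrapper that issues JSON-RPC to the host application over the /tmp/host.sock socket:

    # Names of attached NVMe controllers, sorted and space-separated.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    # Names of all bdevs exposed on the host side (e.g. "nvme0n1 nvme0n2").
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Service IDs (ports) of every path to a controller, e.g. "4420 4421".
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }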
00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.616 [2024-11-20 06:41:44.519449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:24.616 [2024-11-20 06:41:44.520421] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:24.616 [2024-11-20 06:41:44.520444] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:24.616 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:24.897 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:24.898 06:41:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.898 [2024-11-20 06:41:44.650079] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:24.898 06:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:30:25.158 [2024-11-20 06:41:44.952670] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:30:25.158 [2024-11-20 06:41:44.952709] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:25.158 [2024-11-20 06:41:44.952718] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:25.158 [2024-11-20 06:41:44.952724] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.102 [2024-11-20 06:41:45.798990] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:26.102 [2024-11-20 06:41:45.799011] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:26.102 [2024-11-20 06:41:45.806260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.102 [2024-11-20 06:41:45.806274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.102 [2024-11-20 06:41:45.806281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.102 [2024-11-20 06:41:45.806287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.102 [2024-11-20 06:41:45.806292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.102 [2024-11-20 06:41:45.806297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.102 [2024-11-20 06:41:45.806303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.102 [2024-11-20 06:41:45.806308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.102 [2024-11-20 06:41:45.806313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599fd0 is same with the state(6) to be set 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.102 [2024-11-20 06:41:45.816276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599fd0 (9): Bad file descriptor 00:30:26.102 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.102 [2024-11-20 06:41:45.826311] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:26.102 [2024-11-20 06:41:45.826319] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:26.102 [2024-11-20 06:41:45.826323] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:26.102 [2024-11-20 06:41:45.826327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:26.102 [2024-11-20 06:41:45.826340] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:26.102 [2024-11-20 06:41:45.826624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.102 [2024-11-20 06:41:45.826635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599fd0 with addr=10.0.0.2, port=4420 00:30:26.102 [2024-11-20 06:41:45.826641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599fd0 is same with the state(6) to be set 00:30:26.102 [2024-11-20 06:41:45.826650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599fd0 (9): Bad file descriptor 00:30:26.102 [2024-11-20 06:41:45.826657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:26.102 [2024-11-20 06:41:45.826662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:26.102 [2024-11-20 06:41:45.826668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:26.102 [2024-11-20 06:41:45.826676] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:26.102 [2024-11-20 06:41:45.826681] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
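The discovery.sh@74-75 fragments earlier in this trace show the notification counters advancing (notify_id moving 0 -> 1 -> 2 as each namespace is attached). A plausible reconstruction; advancing notify_id by the returned count is inferred from the logged values, not taken from the real discovery.sh:

    # Count notifications newer than the last seen id and move the cursor.
    # notification_count and notify_id are globals shared with the caller.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # Poll until exactly $1 new notifications have arrived.
    is_notification_count_eq() {
        local expected_count=$1
        # bash dynamic scoping keeps expected_count visible to the eval
        # that runs inside waitforcondition.
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }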
00:30:26.102 [2024-11-20 06:41:45.826684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:26.102 [2024-11-20 06:41:45.836369] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:26.102 [2024-11-20 06:41:45.836378] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:26.102 [2024-11-20 06:41:45.836381] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:26.102 [2024-11-20 06:41:45.836384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:26.102 [2024-11-20 06:41:45.836394] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:26.102 [2024-11-20 06:41:45.836679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.102 [2024-11-20 06:41:45.836688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599fd0 with addr=10.0.0.2, port=4420 00:30:26.102 [2024-11-20 06:41:45.836693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599fd0 is same with the state(6) to be set 00:30:26.103 [2024-11-20 06:41:45.836701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599fd0 (9): Bad file descriptor 00:30:26.103 [2024-11-20 06:41:45.836712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:26.103 [2024-11-20 06:41:45.836717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:26.103 [2024-11-20 06:41:45.836722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:26.103 [2024-11-20 06:41:45.836726] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:26.103 [2024-11-20 06:41:45.836729] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:26.103 [2024-11-20 06:41:45.836732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:26.103 [2024-11-20 06:41:45.846422] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:26.103 [2024-11-20 06:41:45.846433] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:26.103 [2024-11-20 06:41:45.846436] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:26.103 [2024-11-20 06:41:45.846440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:26.103 [2024-11-20 06:41:45.846450] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:26.103 [2024-11-20 06:41:45.846735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.103 [2024-11-20 06:41:45.846744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599fd0 with addr=10.0.0.2, port=4420 00:30:26.103 [2024-11-20 06:41:45.846753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599fd0 is same with the state(6) to be set 00:30:26.103 [2024-11-20 06:41:45.846761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599fd0 (9): Bad file descriptor 00:30:26.103 [2024-11-20 06:41:45.846769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:26.103 [2024-11-20 06:41:45.846774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:26.103 [2024-11-20 06:41:45.846781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:26.103 [2024-11-20 06:41:45.846786] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:26.103 [2024-11-20 06:41:45.846789] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:26.103 [2024-11-20 06:41:45.846792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:26.103 [2024-11-20 06:41:45.856478] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:26.103 [2024-11-20 06:41:45.856488] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:26.103 [2024-11-20 06:41:45.856491] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:26.103 [2024-11-20 06:41:45.856494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:26.103 [2024-11-20 06:41:45.856504] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:26.103 [2024-11-20 06:41:45.856798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.103 [2024-11-20 06:41:45.856808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599fd0 with addr=10.0.0.2, port=4420 00:30:26.103 [2024-11-20 06:41:45.856814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599fd0 is same with the state(6) to be set 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:26.103 [2024-11-20 06:41:45.856822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599fd0 (9): Bad file descriptor 00:30:26.103 [2024-11-20 06:41:45.856834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:26.103 [2024-11-20 06:41:45.856839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:26.103 [2024-11-20 06:41:45.856844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:26.103 [2024-11-20 06:41:45.856848] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:26.103 [2024-11-20 06:41:45.856851] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:26.103 [2024-11-20 06:41:45.856854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.103 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:26.103 [2024-11-20 06:41:45.866533] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:26.103 [2024-11-20 06:41:45.866544] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:26.103 [2024-11-20 06:41:45.866547] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:26.103 [2024-11-20 06:41:45.866551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:26.103 [2024-11-20 06:41:45.866562] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:26.103 [2024-11-20 06:41:45.866975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.103 [2024-11-20 06:41:45.867005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599fd0 with addr=10.0.0.2, port=4420 00:30:26.103 [2024-11-20 06:41:45.867014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599fd0 is same with the state(6) to be set 00:30:26.103 [2024-11-20 06:41:45.867028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599fd0 (9): Bad file descriptor 00:30:26.103 [2024-11-20 06:41:45.867037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:26.103 [2024-11-20 06:41:45.867042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:26.103 [2024-11-20 06:41:45.867048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:26.103 [2024-11-20 06:41:45.867054] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:26.103 [2024-11-20 06:41:45.867058] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:26.103 [2024-11-20 06:41:45.867061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:26.103 [2024-11-20 06:41:45.876592] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:26.103 [2024-11-20 06:41:45.876603] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:26.103 [2024-11-20 06:41:45.876606] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:26.103 [2024-11-20 06:41:45.876610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:26.103 [2024-11-20 06:41:45.876622] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:26.104 [2024-11-20 06:41:45.877002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.104 [2024-11-20 06:41:45.877033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599fd0 with addr=10.0.0.2, port=4420 00:30:26.104 [2024-11-20 06:41:45.877042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599fd0 is same with the state(6) to be set 00:30:26.104 [2024-11-20 06:41:45.877057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599fd0 (9): Bad file descriptor 00:30:26.104 [2024-11-20 06:41:45.877074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:26.104 [2024-11-20 06:41:45.877082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:26.104 [2024-11-20 06:41:45.877088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:26.104 [2024-11-20 06:41:45.877093] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:30:26.104 [2024-11-20 06:41:45.877103] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:26.104 [2024-11-20 06:41:45.877106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:26.104 [2024-11-20 06:41:45.885485] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:26.104 [2024-11-20 06:41:45.885501] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:26.104 06:41:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.104 06:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.104 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:26.104 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:26.104 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:26.104 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:26.104 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:26.104 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.104 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.365 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.366 06:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.749 [2024-11-20 06:41:47.223912] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:27.749 [2024-11-20 06:41:47.223925] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:27.749 [2024-11-20 06:41:47.223934] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:27.749 [2024-11-20 06:41:47.311185] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:27.749 [2024-11-20 06:41:47.580468] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:30:27.749 [2024-11-20 06:41:47.581136] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x15b11d0:1 started. 
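The NOT wrapper traced below (autotest_common.sh@650-677) inverts a command's exit status, so an expected RPC failure, such as starting a second discovery service under an already-used name, counts as a pass. A stripped-down reconstruction; the valid_exec_arg/type checks visible in the trace are omitted here:

    # Run a command that is expected to fail; succeed only if it did fail.
    NOT() {
        local es=0
        "$@" || es=$?
        # A status above 128 means death by signal; propagate that as a
        # real failure rather than treating it as the expected error.
        (( es > 128 )) && return "$es"
        (( !es == 0 ))   # exit 0 iff es was nonzero, as in the trace
    }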
00:30:27.749 [2024-11-20 06:41:47.582460] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:27.749 [2024-11-20 06:41:47.582482] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.749 request: 00:30:27.749 { 00:30:27.749 "name": "nvme", 00:30:27.749 "trtype": "tcp", 00:30:27.749 "traddr": "10.0.0.2", 00:30:27.749 "adrfam": "ipv4", 00:30:27.749 "trsvcid": "8009", 00:30:27.749 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:27.749 "wait_for_attach": true, 00:30:27.749 "method": "bdev_nvme_start_discovery", 00:30:27.749 "req_id": 1 00:30:27.749 } 00:30:27.749 Got JSON-RPC error response 00:30:27.749 response: 00:30:27.749 { 00:30:27.749 "code": -17, 00:30:27.749 "message": "File exists" 00:30:27.749 } 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:27.749 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.750 06:41:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:27.750 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:27.750 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:27.750 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.750 [2024-11-20 06:41:47.634409] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x15b11d0 was disconnected and freed. delete nvme_qpair. 00:30:27.750 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:27.750 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:27.750 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:27.750 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:27.750 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:27.750 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:27.750 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.750 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.011 request: 00:30:28.011 { 00:30:28.011 "name": "nvme_second", 00:30:28.011 "trtype": "tcp", 00:30:28.011 "traddr": "10.0.0.2", 00:30:28.011 "adrfam": "ipv4", 00:30:28.011 "trsvcid": "8009", 00:30:28.011 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:28.011 "wait_for_attach": true, 00:30:28.011 "method": 
"bdev_nvme_start_discovery", 00:30:28.011 "req_id": 1 00:30:28.011 } 00:30:28.011 Got JSON-RPC error response 00:30:28.011 response: 00:30:28.011 { 00:30:28.011 "code": -17, 00:30:28.011 "message": "File exists" 00:30:28.011 } 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:28.011 06:41:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.011 06:41:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.070 [2024-11-20 06:41:48.837732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.070 [2024-11-20 06:41:48.837759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b3900 with addr=10.0.0.2, port=8010 00:30:29.070 [2024-11-20 06:41:48.837769] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:29.070 [2024-11-20 06:41:48.837775] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:29.070 [2024-11-20 06:41:48.837780] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:30.027 [2024-11-20 06:41:49.840243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.027 [2024-11-20 06:41:49.840261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b3900 with addr=10.0.0.2, port=8010 00:30:30.027 [2024-11-20 06:41:49.840269] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:30.027 [2024-11-20 06:41:49.840274] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:30.027 [2024-11-20 06:41:49.840278] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:30.968 [2024-11-20 06:41:50.842232] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:30.968 request: 00:30:30.968 { 00:30:30.968 "name": "nvme_second", 00:30:30.968 "trtype": "tcp", 00:30:30.968 "traddr": "10.0.0.2", 00:30:30.968 "adrfam": "ipv4", 00:30:30.968 "trsvcid": "8010", 00:30:30.968 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:30.968 "wait_for_attach": false, 00:30:30.968 "attach_timeout_ms": 3000, 00:30:30.968 "method": "bdev_nvme_start_discovery", 00:30:30.968 "req_id": 1 00:30:30.968 } 00:30:30.968 Got JSON-RPC error response 00:30:30.968 response: 00:30:30.968 { 00:30:30.968 "code": -110, 00:30:30.968 "message": "Connection timed out" 00:30:30.968 } 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:30.968 06:41:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:30.968 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2837860 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:31.228 rmmod nvme_tcp 00:30:31.228 rmmod nvme_fabrics 00:30:31.228 rmmod nvme_keyring 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2837530 ']' 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2837530 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 2837530 ']' 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 2837530 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:31.228 06:41:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2837530 00:30:31.228 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:31.228 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:31.228 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2837530' 00:30:31.228 killing process with pid 2837530 00:30:31.229 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 2837530 
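Note: the two expected failures traced above exercise the error paths of bdev_nvme_start_discovery. Re-issuing discovery as "nvme_second" against 10.0.0.2:8009, where a discovery connection already exists, is rejected immediately with JSON-RPC error -17 "File exists"; the second attempt targets the unserved port 8010 with an attach timeout, so the poller retries connect() (errno 111) roughly once per second until the 3000 ms budget expires with -110 "Connection timed out". A minimal reproduction sketch, assuming the same target layout and sockets as this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Expected to fail with -17: a discovery service is already attached on 8009.
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # Expected to fail with -110 after ~3 s: nothing listens on 8010.
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
      -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000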
00:30:31.229 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 2837530 00:30:31.229 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:31.229 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:31.229 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:31.229 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:30:31.489 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:30:31.489 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:31.489 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:30:31.489 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:31.489 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:31.489 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.489 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.489 06:41:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.402 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:33.402 00:30:33.402 real 0m20.595s 00:30:33.402 user 0m23.873s 00:30:33.402 sys 0m7.337s 00:30:33.402 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:33.402 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.402 ************************************ 00:30:33.402 END TEST nvmf_host_discovery 00:30:33.402 ************************************ 00:30:33.402 06:41:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:33.402 06:41:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:33.402 06:41:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:33.402 06:41:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.402 ************************************ 00:30:33.402 START TEST nvmf_host_multipath_status 00:30:33.402 ************************************ 00:30:33.402 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:33.664 * Looking for test storage... 
00:30:33.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:33.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.664 --rc genhtml_branch_coverage=1 00:30:33.664 --rc genhtml_function_coverage=1 00:30:33.664 --rc genhtml_legend=1 00:30:33.664 --rc geninfo_all_blocks=1 00:30:33.664 --rc geninfo_unexecuted_blocks=1 00:30:33.664 00:30:33.664 ' 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:33.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.664 --rc genhtml_branch_coverage=1 00:30:33.664 --rc genhtml_function_coverage=1 00:30:33.664 --rc genhtml_legend=1 00:30:33.664 --rc geninfo_all_blocks=1 00:30:33.664 --rc geninfo_unexecuted_blocks=1 00:30:33.664 00:30:33.664 ' 00:30:33.664 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:33.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.664 --rc genhtml_branch_coverage=1 00:30:33.664 --rc genhtml_function_coverage=1 00:30:33.665 --rc genhtml_legend=1 00:30:33.665 --rc geninfo_all_blocks=1 00:30:33.665 --rc geninfo_unexecuted_blocks=1 00:30:33.665 00:30:33.665 ' 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:33.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.665 --rc genhtml_branch_coverage=1 00:30:33.665 --rc genhtml_function_coverage=1 00:30:33.665 --rc genhtml_legend=1 00:30:33.665 --rc geninfo_all_blocks=1 00:30:33.665 --rc geninfo_unexecuted_blocks=1 00:30:33.665 00:30:33.665 ' 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
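The version gate traced above decides which lcov coverage flags get exported. A condensed sketch of the cmp_versions logic from scripts/common.sh, as exercised here with "lt 1.15 2":

  IFS=.-: read -ra ver1 <<< "1.15"   # installed lcov version
  IFS=.-: read -ra ver2 <<< "2"      # threshold
  # Fields are compared numerically left to right; 1 < 2 in the first field,
  # so 1.15 < 2 holds and the pre-2.0 lcov options
  # "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" are selected.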
00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:33.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:30:33.665 06:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.807 06:42:00 
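The "[: : integer expression expected" message a few lines up is benign: line 33 of test/nvmf/common.sh compares a variable that is empty in this configuration numerically, which test(1) rejects with exit status 2 and bash then treats as false. Reproducible in isolation:

  [ '' -eq 1 ] && echo reached   # prints "[: : integer expression expected", nothing else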
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:41.807 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:41.807 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:41.807 Found net devices under 0000:31:00.0: cvl_0_0 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:30:41.807 Found net devices under 0000:31:00.1: cvl_0_1 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.807 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.808 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.808 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.808 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.808 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.808 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.808 06:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.808 06:42:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:41.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:30:41.808 00:30:41.808 --- 10.0.0.2 ping statistics --- 00:30:41.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.808 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:30:41.808 00:30:41.808 --- 10.0.0.1 ping statistics --- 00:30:41.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.808 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2844146 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2844146 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2844146 ']' 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:41.808 06:42:01 
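The nvmf_tcp_init sequence above builds a loopback topology from the two physical E810 ports by moving one into a network namespace; the two pings confirm both directions before the target starts. The same setup, condensed (interface names and addresses as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator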
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:41.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:41.808 [2024-11-20 06:42:01.308927] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:30:41.808 [2024-11-20 06:42:01.308995] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.808 [2024-11-20 06:42:01.410321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:41.808 [2024-11-20 06:42:01.463274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.808 [2024-11-20 06:42:01.463323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.808 [2024-11-20 06:42:01.463331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.808 [2024-11-20 06:42:01.463339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.808 [2024-11-20 06:42:01.463345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.808 [2024-11-20 06:42:01.465164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.808 [2024-11-20 06:42:01.465169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.380 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:42.380 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:30:42.380 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:42.380 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:42.380 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:42.380 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.380 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2844146 00:30:42.380 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:42.642 [2024-11-20 06:42:02.333773] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.642 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:42.902 Malloc0 00:30:42.902 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:30:42.902 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:43.164 06:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.425 [2024-11-20 06:42:03.162912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.425 06:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:43.685 [2024-11-20 06:42:03.359418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:43.686 06:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:43.686 06:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2844550 00:30:43.686 06:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:43.686 06:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2844550 /var/tmp/bdevperf.sock 00:30:43.686 06:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2844550 ']' 00:30:43.686 06:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:43.686 06:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:43.686 06:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:43.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
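bdevperf is started idle (-z) and then configured entirely over its own RPC socket; the test attaches the same subsystem twice, once per listener, with -x multipath so both connections collapse onto a single Nvme0n1 bdev. A sketch of that pattern as driven below, with paths taken from this run:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 90 &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10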
00:30:43.686 06:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:43.686 06:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:44.628 06:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:44.628 06:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:30:44.628 06:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:44.628 06:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:44.888 Nvme0n1 00:30:44.889 06:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:45.460 Nvme0n1 00:30:45.460 06:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:45.460 06:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:47.371 06:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:47.371 06:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:47.631 06:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:47.631 06:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:49.014 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:49.014 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:49.014 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.014 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:49.014 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.014 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:49.014 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.014 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:49.014 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:49.014 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:49.014 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.015 06:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:49.275 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.275 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:49.275 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.275 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:49.536 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.536 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:49.536 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.536 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:49.536 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.536 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:49.536 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.536 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:49.796 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.796 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:49.796 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
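Each check_status round here flips the ANA state of the two listeners and then polls bdevperf's view of the paths. The probe is a jq filter over bdev_nvme_get_io_paths, e.g. for the 4420 path (filter verbatim from the trace):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | \
      jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
  # "true" means the 4420 path is currently active; the "connected" and
  # "accessible" fields are checked the same way after each
  # nvmf_subsystem_listener_set_ana_state call.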
00:30:50.057 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:50.057 06:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:51.442 06:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:51.442 06:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:51.442 06:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.442 06:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:51.442 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:51.442 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:51.442 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.442 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:51.442 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.442 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:51.442 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.442 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:51.703 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.703 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:51.703 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:51.703 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:51.963 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.963 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:51.963 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:30:51.963 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:52.222 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.222 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:52.222 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.222 06:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:52.222 06:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:52.222 06:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:52.222 06:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:52.482 06:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:52.741 06:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:53.682 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:53.682 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:53.682 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.682 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:53.942 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.942 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:53.942 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.942 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:53.942 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:53.942 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:53.942 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.200 06:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:54.200 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.200 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:54.200 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:54.200 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.459 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.459 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:54.459 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.459 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:54.720 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.720 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:54.720 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:54.720 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:54.720 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.720 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:54.720 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:54.980 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:55.241 06:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:56.181 06:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:56.181 06:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:56.181 06:42:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.181 06:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:56.442 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.442 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:56.442 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.442 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:56.442 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:56.442 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:56.442 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.442 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:56.702 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.702 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:56.702 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.702 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:56.962 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.962 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:56.962 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.962 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:57.223 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.223 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:57.223 06:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:57.223 06:42:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:57.223 06:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:57.223 06:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:57.223 06:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:57.484 06:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:57.745 06:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:58.687 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:58.687 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:58.687 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.687 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:58.948 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:58.948 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:58.948 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.948 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:58.948 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:58.948 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:58.948 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.948 06:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:59.209 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.209 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:59.209 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.209 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:59.470 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.470 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:59.470 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.470 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:59.470 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:59.470 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:59.470 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.470 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:59.730 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:59.730 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:59.730 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:59.990 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:00.252 06:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:01.195 06:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:01.195 06:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:01.195 06:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.195 06:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:01.455 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:01.455 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:01.456 06:42:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.456 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:01.456 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.456 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:01.456 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.456 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:01.717 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.717 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:01.717 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.717 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:01.977 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:01.977 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:01.977 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.977 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:01.977 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:01.977 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:01.977 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.977 06:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:02.238 06:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.238 06:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:02.498 06:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:31:02.498 06:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:02.759 06:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:02.759 06:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:03.700 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:03.700 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:03.961 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.961 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:03.961 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:03.961 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:03.961 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:03.961 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:04.223 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.223 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:04.223 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.223 06:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:04.483 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.483 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:04.483 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.483 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:04.483 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.483 06:42:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:04.484 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:04.484 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.744 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:04.744 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:04.744 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.744 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:05.005 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:05.005 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:05.005 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:05.005 06:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:05.266 06:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:06.206 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:06.206 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:06.207 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.207 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:06.467 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:06.467 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:06.467 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.467 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:06.727 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.727 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:06.727 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.727 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:06.727 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.987 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:06.987 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.987 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:06.987 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.987 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:06.987 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.988 06:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:07.248 06:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.248 06:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:07.248 06:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.248 06:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:07.508 06:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.508 06:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:07.509 06:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:07.509 06:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:07.768 06:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
00:31:08.707 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:08.707 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:08.707 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.707 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:08.969 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:08.969 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:08.969 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:08.969 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:09.229 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.229 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:09.229 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.229 06:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:09.229 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.229 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:09.229 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.229 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:09.495 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.495 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:09.495 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.495 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:09.763 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.763 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:09.763 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.763 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:09.763 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.763 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:09.763 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:10.024 06:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:10.284 06:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:11.227 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:11.227 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:11.227 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.227 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:11.487 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:11.487 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:11.487 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.487 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:11.748 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:11.748 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:11.748 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.748 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:11.748 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:31:11.748 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:11.748 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:11.748 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:12.008 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.008 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:12.008 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.008 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:12.268 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.268 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:12.268 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.268 06:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:12.268 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:12.268 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2844550 00:31:12.268 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2844550 ']' 00:31:12.268 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2844550 00:31:12.268 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:31:12.268 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:12.268 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2844550 00:31:12.531 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:31:12.531 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:31:12.531 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2844550' 00:31:12.531 killing process with pid 2844550 00:31:12.531 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2844550 00:31:12.531 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2844550 00:31:12.531 { 00:31:12.531 "results": [ 00:31:12.531 { 00:31:12.531 "job": "Nvme0n1", 
00:31:12.531 "core_mask": "0x4", 00:31:12.531 "workload": "verify", 00:31:12.531 "status": "terminated", 00:31:12.531 "verify_range": { 00:31:12.531 "start": 0, 00:31:12.531 "length": 16384 00:31:12.531 }, 00:31:12.531 "queue_depth": 128, 00:31:12.531 "io_size": 4096, 00:31:12.531 "runtime": 26.999783, 00:31:12.531 "iops": 11940.355224336432, 00:31:12.531 "mibps": 46.64201259506419, 00:31:12.531 "io_failed": 0, 00:31:12.531 "io_timeout": 0, 00:31:12.531 "avg_latency_us": 10684.400016295116, 00:31:12.531 "min_latency_us": 230.4, 00:31:12.531 "max_latency_us": 3019898.88 00:31:12.531 } 00:31:12.531 ], 00:31:12.531 "core_count": 1 00:31:12.531 } 00:31:12.531 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2844550 00:31:12.531 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:12.531 [2024-11-20 06:42:03.436641] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:31:12.531 [2024-11-20 06:42:03.436720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2844550 ] 00:31:12.531 [2024-11-20 06:42:03.529339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.531 [2024-11-20 06:42:03.579434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.531 Running I/O for 90 seconds... 00:31:12.531 10257.00 IOPS, 40.07 MiB/s [2024-11-20T05:42:32.451Z] 10768.00 IOPS, 42.06 MiB/s [2024-11-20T05:42:32.451Z] 11041.00 IOPS, 43.13 MiB/s [2024-11-20T05:42:32.451Z] 11376.75 IOPS, 44.44 MiB/s [2024-11-20T05:42:32.451Z] 11717.40 IOPS, 45.77 MiB/s [2024-11-20T05:42:32.451Z] 11965.17 IOPS, 46.74 MiB/s [2024-11-20T05:42:32.451Z] 12094.71 IOPS, 47.24 MiB/s [2024-11-20T05:42:32.451Z] 12209.50 IOPS, 47.69 MiB/s [2024-11-20T05:42:32.451Z] 12288.56 IOPS, 48.00 MiB/s [2024-11-20T05:42:32.451Z] 12363.60 IOPS, 48.30 MiB/s [2024-11-20T05:42:32.451Z] 12409.27 IOPS, 48.47 MiB/s [2024-11-20T05:42:32.451Z] 12447.33 IOPS, 48.62 MiB/s [2024-11-20T05:42:32.451Z] [2024-11-20 06:42:17.260827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.531 [2024-11-20 06:42:17.260858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:12.531 [2024-11-20 06:42:17.260893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.531 [2024-11-20 06:42:17.260900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:12.531 [2024-11-20 06:42:17.260911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.531 [2024-11-20 06:42:17.260916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.260927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 
06:42:17.260932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.260943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.260948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.260958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.260963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.260973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.260979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.260989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.260994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.261004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.261009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.261020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.261039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.261050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.261055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.261066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.261071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.261081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.261086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.261096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16296 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.261102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.261112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.261118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262607] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:12.532 [2024-11-20 06:42:17.262780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:12.532 [2024-11-20 06:42:17.262793] 
00:31:12.532-00:31:12.534 [2024-11-20 06:42:17.262799 .. 06:42:17.263812] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: burst of WRITE commands (sqid:1 nsid:1, lba 16456..16784, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1 nsid:1, lba 16136..16184, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0006..0036 p:0 m:0 dnr:0 [near-identical per-command NOTICE pairs condensed]
00:31:12.534 11497.92 IOPS, 44.91 MiB/s [2024-11-20T05:42:32.454Z] 10676.64 IOPS, 41.71 MiB/s [2024-11-20T05:42:32.454Z] 9964.87 IOPS, 38.93 MiB/s [2024-11-20T05:42:32.454Z] 10148.19 IOPS, 39.64 MiB/s [2024-11-20T05:42:32.454Z] 10305.76 IOPS, 40.26 MiB/s [2024-11-20T05:42:32.454Z] 10619.78 IOPS, 41.48 MiB/s [2024-11-20T05:42:32.454Z] 10945.42 IOPS, 42.76 MiB/s [2024-11-20T05:42:32.454Z] 11193.50 IOPS, 43.72 MiB/s [2024-11-20T05:42:32.454Z] 11274.29 IOPS, 44.04 MiB/s [2024-11-20T05:42:32.454Z] 11341.41 IOPS, 44.30 MiB/s [2024-11-20T05:42:32.454Z] 11524.87 IOPS, 45.02 MiB/s [2024-11-20T05:42:32.454Z] 11737.50 IOPS, 45.85 MiB/s [2024-11-20T05:42:32.454Z]
00:31:12.534-00:31:12.535 [2024-11-20 06:42:30.017810 .. 06:42:30.019416] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: second burst of WRITE commands (sqid:1 nsid:1, lba 121672..122688, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1 nsid:1, lba 121680..121904, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), again all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0042..0009 (wrapping past 007f) p:0 m:0 dnr:0 [near-identical per-command NOTICE pairs condensed]
11891.40 IOPS, 46.45 MiB/s [2024-11-20T05:42:32.455Z] 11925.54 IOPS, 46.58 MiB/s [2024-11-20T05:42:32.455Z]
00:31:12.535 Received shutdown signal, test time was about 27.000391 seconds
00:31:12.536 Latency(us)
[2024-11-20T05:42:32.456Z] Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
	 Nvme0n1                                                                  :      27.00   11940.36      46.64       0.00       0.00   10684.40     230.40 3019898.88
===================================================================================================================
	 Total                                                                    :            11940.36      46.64       0.00       0.00   10684.40     230.40 3019898.88
00:31:12.536 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:12.796
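The teardown being traced here (and continuing below) reduces to a few shell steps; a minimal sketch of what multipath_status.sh@143-148 is doing, with $rootdir standing in for the workspace path above (an assumed shorthand, not a variable taken from the log):

    # Condensed sketch of the traced teardown; not the verbatim SPDK script.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$rootdir/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # @143: remove the test subsystem from the target
    trap - SIGINT SIGTERM EXIT                                                  # @145: clear the cleanup traps
    rm -f "$rootdir/test/nvmf/host/try.txt"                                     # @147: delete the scratch file the test wrote
    nvmftestfini                                                                # @148: generic nvmf teardown from test/nvmf/common.sh

The nvmftestfini expansion is what the trace walks through next: nvmfcleanup syncs and unloads the nvme-tcp modules, killprocess stops the target process, and iptr strips the SPDK_NVMF iptables rules via iptables-save | grep -v SPDK_NVMF | iptables-restore.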
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:12.796 rmmod nvme_tcp 00:31:12.796 rmmod nvme_fabrics 00:31:12.796 rmmod nvme_keyring 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2844146 ']' 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2844146 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2844146 ']' 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2844146 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2844146 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2844146' 00:31:12.796 killing process with pid 2844146 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2844146 00:31:12.796 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2844146 00:31:13.055 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:13.055 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:13.055 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:13.055 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:31:13.055 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:31:13.055 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:13.055 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:31:13.055 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.055 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.055 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.056 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval 
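The killprocess helper just traced (common/autotest_common.sh@952-976) can be reconstructed from the xtrace; a simplified sketch under the assumption that the guards run in the traced order (the real helper covers more platforms and sudo-wrapped children):

    # Simplified reconstruction of killprocess from the trace above; not verbatim SPDK code.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                             # @952: a pid argument is required
        kill -0 "$pid" 2>/dev/null || return 0                # @956: already gone, nothing to do
        if [ "$(uname)" = Linux ]; then                       # @957: Linux-only name lookup
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @958: e.g. reactor_0 in this run
            [ "$process_name" = sudo ] && return 1            # @962: refuse to kill a bare sudo wrapper
        fi
        echo "killing process with pid $pid"                  # @970: matches the log line above
        kill "$pid"                                           # @971
        wait "$pid"                                           # @976: reap so the exit status is observed
    }

The trace resumes below with the eval'd _remove_spdk_ns call, which flushes the test network interface (ip -4 addr flush cvl_0_1) before the test's timing summary is printed.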
'_remove_spdk_ns 15> /dev/null' 00:31:13.056 06:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.052 06:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:15.052 00:31:15.052 real 0m41.590s 00:31:15.052 user 1m47.231s 00:31:15.052 sys 0m11.782s 00:31:15.053 06:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:15.053 06:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:15.053 ************************************ 00:31:15.053 END TEST nvmf_host_multipath_status 00:31:15.053 ************************************ 00:31:15.053 06:42:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:15.053 06:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:15.053 06:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:15.053 06:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:15.372 ************************************ 00:31:15.372 START TEST nvmf_discovery_remove_ifc 00:31:15.372 ************************************ 00:31:15.372 06:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:15.372 * Looking for test storage... 00:31:15.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case 
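The lt/cmp_versions walk that begins above and finishes in the trace below is the stock version comparison from scripts/common.sh; a condensed sketch of the logic exercised for "lt 1.15 2" (simplified: the real helper also routes each field through its decimal() validator, which the trace shows firing):

    # Condensed sketch of scripts/common.sh cmp_versions; simplified, not verbatim.
    cmp_versions() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"        # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"        # "2"    -> (2)
        local op=$2 lt=0 gt=0
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do       # field-by-field compare; missing fields count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
        done
        case "$op" in                         # the case dispatch the trace is entering here
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
            *)   return 1 ;;
        esac
    }
    lt() { cmp_versions "$1" '<' "$2"; }      # wrapper seen in the trace: lt 1.15 2 -> true

Here the first fields already decide the result (1 < 2), so lcov 1.15 is treated as pre-2.0, which is evidently why the older --rc lcov_branch_coverage/--rc lcov_function_coverage flag spellings are exported in the LCOV_OPTS lines just below.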
"$op" in 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:15.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.372 --rc genhtml_branch_coverage=1 00:31:15.372 --rc genhtml_function_coverage=1 00:31:15.372 --rc genhtml_legend=1 00:31:15.372 --rc geninfo_all_blocks=1 00:31:15.372 --rc geninfo_unexecuted_blocks=1 00:31:15.372 00:31:15.372 ' 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:15.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.372 --rc genhtml_branch_coverage=1 00:31:15.372 --rc genhtml_function_coverage=1 00:31:15.372 --rc genhtml_legend=1 00:31:15.372 --rc geninfo_all_blocks=1 00:31:15.372 --rc geninfo_unexecuted_blocks=1 00:31:15.372 00:31:15.372 ' 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:15.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.372 --rc genhtml_branch_coverage=1 00:31:15.372 --rc genhtml_function_coverage=1 00:31:15.372 --rc genhtml_legend=1 00:31:15.372 --rc geninfo_all_blocks=1 00:31:15.372 --rc geninfo_unexecuted_blocks=1 00:31:15.372 00:31:15.372 ' 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:15.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.372 --rc genhtml_branch_coverage=1 00:31:15.372 --rc genhtml_function_coverage=1 
00:31:15.372 --rc genhtml_legend=1 00:31:15.372 --rc geninfo_all_blocks=1 00:31:15.372 --rc geninfo_unexecuted_blocks=1 00:31:15.372 00:31:15.372 ' 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:15.372 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same three tool directories repeated...]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...same directories repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...same directories repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...same directories repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:31:15.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.373 06:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:23.521 06:42:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:23.521 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:23.522 Found 
0000:31:00.0 (0x8086 - 0x159b) 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:23.522 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:23.522 Found net devices under 0000:31:00.0: cvl_0_0 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:23.522 Found net devices under 0000:31:00.1: cvl_0_1 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:23.522 06:42:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:23.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:31:23.522 00:31:23.522 --- 10.0.0.2 ping statistics --- 00:31:23.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.522 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:23.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:31:23.522 00:31:23.522 --- 10.0.0.1 ping statistics --- 00:31:23.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.522 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2854941 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2854941 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
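Condensed from the nvmf_tcp_init trace above, the network topology this test builds is roughly the following (a sketch assuming the two e810 ports enumerated earlier, cvl_0_0 and cvl_0_1; names, addresses, and the nvmf_tgt invocation are copied from the trace):

    # Move the target-side port into its own network namespace; the
    # initiator-side port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # The target application then runs entirely inside the namespace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2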
00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2854941 ']' 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:23.522 06:42:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.522 [2024-11-20 06:42:42.648585] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:31:23.522 [2024-11-20 06:42:42.648651] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.523 [2024-11-20 06:42:42.745351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.523 [2024-11-20 06:42:42.778434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.523 [2024-11-20 06:42:42.778464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.523 [2024-11-20 06:42:42.778470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.523 [2024-11-20 06:42:42.778475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.523 [2024-11-20 06:42:42.778482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:23.523 [2024-11-20 06:42:42.779029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.783 [2024-11-20 06:42:43.497505] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.783 [2024-11-20 06:42:43.505653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:23.783 null0 00:31:23.783 [2024-11-20 06:42:43.537670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2855285 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2855285 /tmp/host.sock 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2855285 ']' 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:23.783 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:23.783 06:42:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.783 [2024-11-20 06:42:43.611303] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:31:23.783 [2024-11-20 06:42:43.611347] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855285 ] 00:31:23.783 [2024-11-20 06:42:43.698647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.042 [2024-11-20 06:42:43.734499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.624 06:42:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:26.006 [2024-11-20 06:42:45.526722] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:26.006 [2024-11-20 06:42:45.526743] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:26.006 [2024-11-20 06:42:45.526761] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:26.006 [2024-11-20 06:42:45.654169] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:26.006 [2024-11-20 06:42:45.875422] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:31:26.006 [2024-11-20 06:42:45.876420] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1433550:1 started. 
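On the host side the trace corresponds to roughly this driver, with rpc_cmd standing in for SPDK's scripts/rpc.py pointed at the host app's socket (every flag below is copied from the rpc_cmd calls above; only the backgrounding is assumed):

    # A second SPDK app acts as the NVMe-oF host; its RPC server listens on /tmp/host.sock.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc_cmd -s /tmp/host.sock framework_start_init
    # Attach through the discovery service on port 8009. The short timeouts are
    # what make the interface-removal failover observable within seconds.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach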
00:31:26.006 [2024-11-20 06:42:45.878000] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:26.006 [2024-11-20 06:42:45.878044] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:26.006 [2024-11-20 06:42:45.878065] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:26.006 [2024-11-20 06:42:45.878079] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:26.006 [2024-11-20 06:42:45.878100] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:26.006 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.006 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:26.006 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:26.006 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.006 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:26.006 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.006 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:26.006 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:26.006 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:26.006 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.266 [2024-11-20 06:42:45.926524] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1433550 was disconnected and freed. delete nvme_qpair. 
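The bdev_get_bdevs / jq / sort / xargs blocks that repeat through the rest of this trace are the expanded form of two small helpers in discovery_remove_ifc.sh. Reconstructed from the calls visible here (a sketch of the logic, not the verbatim script):

    get_bdev_list() {
        # All bdev names known to the host app, sorted onto one line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the list matches the expectation:
        # "nvme0n1" after the first attach, "" after the interface is removed.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }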
00:31:26.266 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:26.266 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:26.266 06:42:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:26.267 06:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:26.267 06:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:26.267 06:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.267 06:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:26.267 06:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.267 06:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:26.267 06:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:26.267 06:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:26.267 06:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.267 06:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:26.267 06:42:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:27.206 06:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:27.206 06:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.206 06:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:27.206 06:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.206 06:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:27.206 06:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.206 06:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:27.483 06:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.483 06:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:27.483 06:42:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:28.422 06:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:28.422 06:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:28.422 06:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:28.422 06:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.422 06:42:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:28.422 06:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:28.422 06:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:28.422 06:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.422 06:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:28.422 06:42:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:29.392 06:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:29.392 06:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.392 06:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:29.392 06:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.392 06:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:29.392 06:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:29.392 06:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:29.392 06:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.392 06:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:29.392 06:42:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:30.773 06:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:30.773 06:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.773 06:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:30.773 06:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.773 06:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:30.773 06:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.773 06:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:30.773 06:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.773 06:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:30.773 06:42:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:31.713 [2024-11-20 06:42:51.318885] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:31.713 [2024-11-20 06:42:51.318927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.713 [2024-11-20 06:42:51.318937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.713 [2024-11-20 06:42:51.318944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.713 [2024-11-20 06:42:51.318950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.713 [2024-11-20 06:42:51.318955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.713 [2024-11-20 06:42:51.318961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.713 [2024-11-20 06:42:51.318966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.713 [2024-11-20 06:42:51.318972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.713 [2024-11-20 06:42:51.318978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.713 [2024-11-20 06:42:51.318983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.713 [2024-11-20 06:42:51.318988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fec0 is same with the state(6) to be set 00:31:31.713 [2024-11-20 06:42:51.328907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140fec0 (9): Bad file descriptor 00:31:31.713 06:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:31.713 06:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.713 06:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:31.713 06:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.713 06:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:31.713 06:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:31.713 06:42:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:31.713 [2024-11-20 06:42:51.338943] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:31.713 [2024-11-20 06:42:51.338953] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:31.713 [2024-11-20 06:42:51.338957] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:31.713 [2024-11-20 06:42:51.338961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:31.713 [2024-11-20 06:42:51.338978] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
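The disconnect and reconnect churn above traces back to the failure injected a few entries earlier: the test deletes the target's address and downs its interface inside the namespace, then polls for the bdev to vanish. Condensed (commands as traced; the expectation comments are an inference from the discovery options):

    # Cut the target off from the initiator.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # Reconnect attempts now fail (connect() errno 110, connection timed out);
    # once ctrlr-loss-timeout-sec (2) expires the controller is dropped,
    # nvme0n1 is deleted, and the polled bdev list goes empty.
    wait_for_bdev ''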
00:31:32.653 [2024-11-20 06:42:52.381833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:32.653 [2024-11-20 06:42:52.381934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140fec0 with addr=10.0.0.2, port=4420 00:31:32.653 [2024-11-20 06:42:52.381969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fec0 is same with the state(6) to be set 00:31:32.653 [2024-11-20 06:42:52.382034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140fec0 (9): Bad file descriptor 00:31:32.653 [2024-11-20 06:42:52.383168] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:31:32.653 [2024-11-20 06:42:52.383239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:32.653 [2024-11-20 06:42:52.383262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:32.653 [2024-11-20 06:42:52.383287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:32.653 [2024-11-20 06:42:52.383307] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:32.653 [2024-11-20 06:42:52.383324] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:32.653 [2024-11-20 06:42:52.383337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:32.653 [2024-11-20 06:42:52.383360] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:32.653 [2024-11-20 06:42:52.383374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.653 06:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.653 06:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:32.653 06:42:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:33.594 [2024-11-20 06:42:53.385794] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:33.594 [2024-11-20 06:42:53.385812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:33.594 [2024-11-20 06:42:53.385821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:33.594 [2024-11-20 06:42:53.385827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:33.594 [2024-11-20 06:42:53.385837] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:31:33.594 [2024-11-20 06:42:53.385842] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:33.594 [2024-11-20 06:42:53.385846] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:31:33.594 [2024-11-20 06:42:53.385849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:33.594 [2024-11-20 06:42:53.385869] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:33.594 [2024-11-20 06:42:53.385888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.594 [2024-11-20 06:42:53.385896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.594 [2024-11-20 06:42:53.385905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.594 [2024-11-20 06:42:53.385910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.594 [2024-11-20 06:42:53.385916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.594 [2024-11-20 06:42:53.385921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.594 [2024-11-20 06:42:53.385927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.594 [2024-11-20 06:42:53.385932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.594 [2024-11-20 06:42:53.385937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.594 [2024-11-20 06:42:53.385942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.594 [2024-11-20 06:42:53.385947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:31:33.594 [2024-11-20 06:42:53.386416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ff600 (9): Bad file descriptor 00:31:33.594 [2024-11-20 06:42:53.387426] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:33.594 [2024-11-20 06:42:53.387435] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:31:33.594 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:33.594 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.594 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:33.594 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.594 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:33.594 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:33.594 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:33.594 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.594 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:33.594 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.594 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.855 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:33.855 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:33.855 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.855 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:33.855 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.855 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:33.855 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:33.855 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:33.855 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.855 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:33.855 06:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:34.796 06:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.796 06:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.796 06:42:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.796 06:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.796 06:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.796 06:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:34.796 06:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.796 06:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.796 06:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:34.796 06:42:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:35.738 [2024-11-20 06:42:55.442766] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:35.738 [2024-11-20 06:42:55.442780] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:35.738 [2024-11-20 06:42:55.442791] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:35.738 [2024-11-20 06:42:55.571163] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:35.999 06:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:36.000 06:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.000 06:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:36.000 06:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.000 06:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:36.000 06:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:36.000 06:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:36.000 06:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.000 06:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:36.000 06:42:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:36.000 [2024-11-20 06:42:55.752199] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:31:36.000 [2024-11-20 06:42:55.752897] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x141a540:1 started. 
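Recovery is the mirror image: the test restores the address, brings the link back up, and waits for the discovery service to re-attach the subsystem. Because this is a fresh attach, a new controller name is allocated, which is why the trace now waits for nvme1n1 on qpair 0x141a540 rather than nvme0n1:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Discovery sees nqn.2016-06.io.spdk:cnode0 again, creates controller
    # nvme1, and the namespace surfaces as bdev nvme1n1.
    wait_for_bdev nvme1n1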
00:31:36.000 [2024-11-20 06:42:55.753786] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:36.000 [2024-11-20 06:42:55.753814] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:36.000 [2024-11-20 06:42:55.753828] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:36.000 [2024-11-20 06:42:55.753839] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:36.000 [2024-11-20 06:42:55.753845] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:36.000 [2024-11-20 06:42:55.800361] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x141a540 was disconnected and freed. delete nvme_qpair. 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2855285 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2855285 ']' 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2855285 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2855285 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2855285' 00:31:36.943 killing process with pid 2855285 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2855285 00:31:36.943 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2855285 00:31:37.204 06:42:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:37.204 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.204 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:31:37.204 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.204 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:31:37.204 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.204 06:42:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.204 rmmod nvme_tcp 00:31:37.204 rmmod nvme_fabrics 00:31:37.204 rmmod nvme_keyring 00:31:37.204 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.204 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:31:37.204 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:31:37.204 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2854941 ']' 00:31:37.204 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2854941 00:31:37.204 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2854941 ']' 00:31:37.204 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2854941 00:31:37.204 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:31:37.204 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:37.205 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2854941 00:31:37.205 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:37.205 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:37.205 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2854941' 00:31:37.205 killing process with pid 2854941 00:31:37.205 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2854941 00:31:37.205 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2854941 00:31:37.466 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.466 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.467 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.467 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:31:37.467 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:37.467 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:31:37.467 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:31:37.467 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.467 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.467 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.467 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.467 06:42:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.377 06:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:39.377 00:31:39.377 real 0m24.293s 00:31:39.377 user 0m29.378s 00:31:39.377 sys 0m7.083s 00:31:39.377 06:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:39.377 06:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:39.377 ************************************ 00:31:39.377 END TEST nvmf_discovery_remove_ifc 00:31:39.378 ************************************ 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.639 ************************************ 00:31:39.639 START TEST nvmf_identify_kernel_target 00:31:39.639 ************************************ 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:39.639 * Looking for test storage... 
00:31:39.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:39.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.639 --rc genhtml_branch_coverage=1 00:31:39.639 --rc genhtml_function_coverage=1 00:31:39.639 --rc genhtml_legend=1 00:31:39.639 --rc geninfo_all_blocks=1 00:31:39.639 --rc geninfo_unexecuted_blocks=1 00:31:39.639 00:31:39.639 ' 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:39.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.639 --rc genhtml_branch_coverage=1 00:31:39.639 --rc genhtml_function_coverage=1 00:31:39.639 --rc genhtml_legend=1 00:31:39.639 --rc geninfo_all_blocks=1 00:31:39.639 --rc geninfo_unexecuted_blocks=1 00:31:39.639 00:31:39.639 ' 00:31:39.639 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:39.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.639 --rc genhtml_branch_coverage=1 00:31:39.639 --rc genhtml_function_coverage=1 00:31:39.639 --rc genhtml_legend=1 00:31:39.639 --rc geninfo_all_blocks=1 00:31:39.639 --rc geninfo_unexecuted_blocks=1 00:31:39.639 00:31:39.639 ' 00:31:39.640 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:39.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:39.640 --rc genhtml_branch_coverage=1 00:31:39.640 --rc genhtml_function_coverage=1 00:31:39.640 --rc genhtml_legend=1 00:31:39.640 --rc geninfo_all_blocks=1 00:31:39.640 --rc geninfo_unexecuted_blocks=1 00:31:39.640 00:31:39.640 ' 00:31:39.640 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:31:39.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:39.902 06:42:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.048 06:43:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:48.048 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:48.048 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:48.048 Found net devices under 0000:31:00.0: cvl_0_0 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.048 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:48.049 Found net devices under 0000:31:00.1: cvl_0_1 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.049 06:43:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:48.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:31:48.049 00:31:48.049 --- 10.0.0.2 ping statistics --- 00:31:48.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.049 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:48.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:31:48.049 00:31:48.049 --- 10.0.0.1 ping statistics --- 00:31:48.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.049 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:48.049 06:43:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:48.049 06:43:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:51.355 Waiting for block devices as requested 00:31:51.355 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:51.355 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:51.355 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:51.355 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:51.355 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:51.355 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:51.355 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:51.616 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:51.616 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:51.877 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:51.877 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:52.138 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:52.138 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:52.138 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:52.399 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:52.399 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:52.399 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:52.660 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:52.660 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:52.660 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:52.660 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:52.660 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:52.660 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
00:31:52.660 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:52.660 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:52.660 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:52.921 No valid GPT data, bailing 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:52.921 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:31:52.922 00:31:52.922 Discovery Log Number of Records 2, Generation counter 2 00:31:52.922 =====Discovery Log Entry 0====== 00:31:52.922 trtype: tcp 00:31:52.922 adrfam: ipv4 00:31:52.922 subtype: current discovery subsystem 00:31:52.922 treq: not specified, sq flow control disable supported 00:31:52.922 portid: 1 00:31:52.922 trsvcid: 4420 00:31:52.922 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:52.922 traddr: 10.0.0.1 00:31:52.922 eflags: none 00:31:52.922 sectype: none 00:31:52.922 =====Discovery Log Entry 1====== 00:31:52.922 trtype: tcp 00:31:52.922 adrfam: ipv4 00:31:52.922 subtype: nvme subsystem 00:31:52.922 treq: not specified, sq flow control disable 
supported 00:31:52.922 portid: 1 00:31:52.922 trsvcid: 4420 00:31:52.922 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:52.922 traddr: 10.0.0.1 00:31:52.922 eflags: none 00:31:52.922 sectype: none 00:31:52.922 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:52.922 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:53.184 ===================================================== 00:31:53.184 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:53.184 ===================================================== 00:31:53.184 Controller Capabilities/Features 00:31:53.184 ================================ 00:31:53.184 Vendor ID: 0000 00:31:53.184 Subsystem Vendor ID: 0000 00:31:53.184 Serial Number: 986532f56292055bcb57 00:31:53.184 Model Number: Linux 00:31:53.184 Firmware Version: 6.8.9-20 00:31:53.184 Recommended Arb Burst: 0 00:31:53.184 IEEE OUI Identifier: 00 00 00 00:31:53.184 Multi-path I/O 00:31:53.184 May have multiple subsystem ports: No 00:31:53.184 May have multiple controllers: No 00:31:53.184 Associated with SR-IOV VF: No 00:31:53.184 Max Data Transfer Size: Unlimited 00:31:53.184 Max Number of Namespaces: 0 00:31:53.184 Max Number of I/O Queues: 1024 00:31:53.184 NVMe Specification Version (VS): 1.3 00:31:53.184 NVMe Specification Version (Identify): 1.3 00:31:53.184 Maximum Queue Entries: 1024 00:31:53.184 Contiguous Queues Required: No 00:31:53.184 Arbitration Mechanisms Supported 00:31:53.184 Weighted Round Robin: Not Supported 00:31:53.184 Vendor Specific: Not Supported 00:31:53.184 Reset Timeout: 7500 ms 00:31:53.184 Doorbell Stride: 4 bytes 00:31:53.184 NVM Subsystem Reset: Not Supported 00:31:53.184 Command Sets Supported 00:31:53.184 NVM Command Set: Supported 00:31:53.184 Boot Partition: Not Supported 00:31:53.184 Memory Page Size Minimum: 4096 bytes 00:31:53.184 Memory Page Size Maximum: 4096 bytes 00:31:53.184 Persistent Memory Region: Not Supported 00:31:53.184 Optional Asynchronous Events Supported 00:31:53.184 Namespace Attribute Notices: Not Supported 00:31:53.184 Firmware Activation Notices: Not Supported 00:31:53.184 ANA Change Notices: Not Supported 00:31:53.184 PLE Aggregate Log Change Notices: Not Supported 00:31:53.184 LBA Status Info Alert Notices: Not Supported 00:31:53.184 EGE Aggregate Log Change Notices: Not Supported 00:31:53.184 Normal NVM Subsystem Shutdown event: Not Supported 00:31:53.185 Zone Descriptor Change Notices: Not Supported 00:31:53.185 Discovery Log Change Notices: Supported 00:31:53.185 Controller Attributes 00:31:53.185 128-bit Host Identifier: Not Supported 00:31:53.185 Non-Operational Permissive Mode: Not Supported 00:31:53.185 NVM Sets: Not Supported 00:31:53.185 Read Recovery Levels: Not Supported 00:31:53.185 Endurance Groups: Not Supported 00:31:53.185 Predictable Latency Mode: Not Supported 00:31:53.185 Traffic Based Keep ALive: Not Supported 00:31:53.185 Namespace Granularity: Not Supported 00:31:53.185 SQ Associations: Not Supported 00:31:53.185 UUID List: Not Supported 00:31:53.185 Multi-Domain Subsystem: Not Supported 00:31:53.185 Fixed Capacity Management: Not Supported 00:31:53.185 Variable Capacity Management: Not Supported 00:31:53.185 Delete Endurance Group: Not Supported 00:31:53.185 Delete NVM Set: Not Supported 00:31:53.185 Extended LBA Formats Supported: Not Supported 00:31:53.185 Flexible Data Placement 
Supported: Not Supported 00:31:53.185 00:31:53.185 Controller Memory Buffer Support 00:31:53.185 ================================ 00:31:53.185 Supported: No 00:31:53.185 00:31:53.185 Persistent Memory Region Support 00:31:53.185 ================================ 00:31:53.185 Supported: No 00:31:53.185 00:31:53.185 Admin Command Set Attributes 00:31:53.185 ============================ 00:31:53.185 Security Send/Receive: Not Supported 00:31:53.185 Format NVM: Not Supported 00:31:53.185 Firmware Activate/Download: Not Supported 00:31:53.185 Namespace Management: Not Supported 00:31:53.185 Device Self-Test: Not Supported 00:31:53.185 Directives: Not Supported 00:31:53.185 NVMe-MI: Not Supported 00:31:53.185 Virtualization Management: Not Supported 00:31:53.185 Doorbell Buffer Config: Not Supported 00:31:53.185 Get LBA Status Capability: Not Supported 00:31:53.185 Command & Feature Lockdown Capability: Not Supported 00:31:53.185 Abort Command Limit: 1 00:31:53.185 Async Event Request Limit: 1 00:31:53.185 Number of Firmware Slots: N/A 00:31:53.185 Firmware Slot 1 Read-Only: N/A 00:31:53.185 Firmware Activation Without Reset: N/A 00:31:53.185 Multiple Update Detection Support: N/A 00:31:53.185 Firmware Update Granularity: No Information Provided 00:31:53.185 Per-Namespace SMART Log: No 00:31:53.185 Asymmetric Namespace Access Log Page: Not Supported 00:31:53.185 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:53.185 Command Effects Log Page: Not Supported 00:31:53.185 Get Log Page Extended Data: Supported 00:31:53.185 Telemetry Log Pages: Not Supported 00:31:53.185 Persistent Event Log Pages: Not Supported 00:31:53.185 Supported Log Pages Log Page: May Support 00:31:53.185 Commands Supported & Effects Log Page: Not Supported 00:31:53.185 Feature Identifiers & Effects Log Page:May Support 00:31:53.185 NVMe-MI Commands & Effects Log Page: May Support 00:31:53.185 Data Area 4 for Telemetry Log: Not Supported 00:31:53.185 Error Log Page Entries Supported: 1 00:31:53.185 Keep Alive: Not Supported 00:31:53.185 00:31:53.185 NVM Command Set Attributes 00:31:53.185 ========================== 00:31:53.185 Submission Queue Entry Size 00:31:53.185 Max: 1 00:31:53.185 Min: 1 00:31:53.185 Completion Queue Entry Size 00:31:53.185 Max: 1 00:31:53.185 Min: 1 00:31:53.185 Number of Namespaces: 0 00:31:53.185 Compare Command: Not Supported 00:31:53.185 Write Uncorrectable Command: Not Supported 00:31:53.185 Dataset Management Command: Not Supported 00:31:53.185 Write Zeroes Command: Not Supported 00:31:53.185 Set Features Save Field: Not Supported 00:31:53.185 Reservations: Not Supported 00:31:53.185 Timestamp: Not Supported 00:31:53.185 Copy: Not Supported 00:31:53.185 Volatile Write Cache: Not Present 00:31:53.185 Atomic Write Unit (Normal): 1 00:31:53.185 Atomic Write Unit (PFail): 1 00:31:53.185 Atomic Compare & Write Unit: 1 00:31:53.185 Fused Compare & Write: Not Supported 00:31:53.185 Scatter-Gather List 00:31:53.185 SGL Command Set: Supported 00:31:53.185 SGL Keyed: Not Supported 00:31:53.185 SGL Bit Bucket Descriptor: Not Supported 00:31:53.185 SGL Metadata Pointer: Not Supported 00:31:53.185 Oversized SGL: Not Supported 00:31:53.185 SGL Metadata Address: Not Supported 00:31:53.185 SGL Offset: Supported 00:31:53.185 Transport SGL Data Block: Not Supported 00:31:53.185 Replay Protected Memory Block: Not Supported 00:31:53.185 00:31:53.185 Firmware Slot Information 00:31:53.185 ========================= 00:31:53.185 Active slot: 0 00:31:53.185 00:31:53.185 00:31:53.185 Error Log 00:31:53.185 
========= 00:31:53.185 00:31:53.185 Active Namespaces 00:31:53.185 ================= 00:31:53.185 Discovery Log Page 00:31:53.185 ================== 00:31:53.185 Generation Counter: 2 00:31:53.185 Number of Records: 2 00:31:53.185 Record Format: 0 00:31:53.185 00:31:53.185 Discovery Log Entry 0 00:31:53.185 ---------------------- 00:31:53.185 Transport Type: 3 (TCP) 00:31:53.185 Address Family: 1 (IPv4) 00:31:53.185 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:53.185 Entry Flags: 00:31:53.185 Duplicate Returned Information: 0 00:31:53.185 Explicit Persistent Connection Support for Discovery: 0 00:31:53.185 Transport Requirements: 00:31:53.185 Secure Channel: Not Specified 00:31:53.185 Port ID: 1 (0x0001) 00:31:53.185 Controller ID: 65535 (0xffff) 00:31:53.185 Admin Max SQ Size: 32 00:31:53.185 Transport Service Identifier: 4420 00:31:53.185 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:53.185 Transport Address: 10.0.0.1 00:31:53.185 Discovery Log Entry 1 00:31:53.185 ---------------------- 00:31:53.185 Transport Type: 3 (TCP) 00:31:53.185 Address Family: 1 (IPv4) 00:31:53.185 Subsystem Type: 2 (NVM Subsystem) 00:31:53.185 Entry Flags: 00:31:53.185 Duplicate Returned Information: 0 00:31:53.185 Explicit Persistent Connection Support for Discovery: 0 00:31:53.185 Transport Requirements: 00:31:53.185 Secure Channel: Not Specified 00:31:53.185 Port ID: 1 (0x0001) 00:31:53.185 Controller ID: 65535 (0xffff) 00:31:53.185 Admin Max SQ Size: 32 00:31:53.185 Transport Service Identifier: 4420 00:31:53.185 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:53.185 Transport Address: 10.0.0.1 00:31:53.185 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:53.185 get_feature(0x01) failed 00:31:53.185 get_feature(0x02) failed 00:31:53.185 get_feature(0x04) failed 00:31:53.185 ===================================================== 00:31:53.185 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:53.185 ===================================================== 00:31:53.185 Controller Capabilities/Features 00:31:53.185 ================================ 00:31:53.185 Vendor ID: 0000 00:31:53.185 Subsystem Vendor ID: 0000 00:31:53.185 Serial Number: 34148555110f8f5e7eeb 00:31:53.185 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:53.185 Firmware Version: 6.8.9-20 00:31:53.185 Recommended Arb Burst: 6 00:31:53.185 IEEE OUI Identifier: 00 00 00 00:31:53.185 Multi-path I/O 00:31:53.185 May have multiple subsystem ports: Yes 00:31:53.185 May have multiple controllers: Yes 00:31:53.185 Associated with SR-IOV VF: No 00:31:53.185 Max Data Transfer Size: Unlimited 00:31:53.185 Max Number of Namespaces: 1024 00:31:53.185 Max Number of I/O Queues: 128 00:31:53.185 NVMe Specification Version (VS): 1.3 00:31:53.185 NVMe Specification Version (Identify): 1.3 00:31:53.185 Maximum Queue Entries: 1024 00:31:53.185 Contiguous Queues Required: No 00:31:53.185 Arbitration Mechanisms Supported 00:31:53.185 Weighted Round Robin: Not Supported 00:31:53.185 Vendor Specific: Not Supported 00:31:53.185 Reset Timeout: 7500 ms 00:31:53.185 Doorbell Stride: 4 bytes 00:31:53.185 NVM Subsystem Reset: Not Supported 00:31:53.185 Command Sets Supported 00:31:53.185 NVM Command Set: Supported 00:31:53.185 Boot Partition: Not Supported 00:31:53.185 
Memory Page Size Minimum: 4096 bytes 00:31:53.185 Memory Page Size Maximum: 4096 bytes 00:31:53.185 Persistent Memory Region: Not Supported 00:31:53.185 Optional Asynchronous Events Supported 00:31:53.185 Namespace Attribute Notices: Supported 00:31:53.185 Firmware Activation Notices: Not Supported 00:31:53.185 ANA Change Notices: Supported 00:31:53.185 PLE Aggregate Log Change Notices: Not Supported 00:31:53.185 LBA Status Info Alert Notices: Not Supported 00:31:53.185 EGE Aggregate Log Change Notices: Not Supported 00:31:53.185 Normal NVM Subsystem Shutdown event: Not Supported 00:31:53.185 Zone Descriptor Change Notices: Not Supported 00:31:53.186 Discovery Log Change Notices: Not Supported 00:31:53.186 Controller Attributes 00:31:53.186 128-bit Host Identifier: Supported 00:31:53.186 Non-Operational Permissive Mode: Not Supported 00:31:53.186 NVM Sets: Not Supported 00:31:53.186 Read Recovery Levels: Not Supported 00:31:53.186 Endurance Groups: Not Supported 00:31:53.186 Predictable Latency Mode: Not Supported 00:31:53.186 Traffic Based Keep ALive: Supported 00:31:53.186 Namespace Granularity: Not Supported 00:31:53.186 SQ Associations: Not Supported 00:31:53.186 UUID List: Not Supported 00:31:53.186 Multi-Domain Subsystem: Not Supported 00:31:53.186 Fixed Capacity Management: Not Supported 00:31:53.186 Variable Capacity Management: Not Supported 00:31:53.186 Delete Endurance Group: Not Supported 00:31:53.186 Delete NVM Set: Not Supported 00:31:53.186 Extended LBA Formats Supported: Not Supported 00:31:53.186 Flexible Data Placement Supported: Not Supported 00:31:53.186 00:31:53.186 Controller Memory Buffer Support 00:31:53.186 ================================ 00:31:53.186 Supported: No 00:31:53.186 00:31:53.186 Persistent Memory Region Support 00:31:53.186 ================================ 00:31:53.186 Supported: No 00:31:53.186 00:31:53.186 Admin Command Set Attributes 00:31:53.186 ============================ 00:31:53.186 Security Send/Receive: Not Supported 00:31:53.186 Format NVM: Not Supported 00:31:53.186 Firmware Activate/Download: Not Supported 00:31:53.186 Namespace Management: Not Supported 00:31:53.186 Device Self-Test: Not Supported 00:31:53.186 Directives: Not Supported 00:31:53.186 NVMe-MI: Not Supported 00:31:53.186 Virtualization Management: Not Supported 00:31:53.186 Doorbell Buffer Config: Not Supported 00:31:53.186 Get LBA Status Capability: Not Supported 00:31:53.186 Command & Feature Lockdown Capability: Not Supported 00:31:53.186 Abort Command Limit: 4 00:31:53.186 Async Event Request Limit: 4 00:31:53.186 Number of Firmware Slots: N/A 00:31:53.186 Firmware Slot 1 Read-Only: N/A 00:31:53.186 Firmware Activation Without Reset: N/A 00:31:53.186 Multiple Update Detection Support: N/A 00:31:53.186 Firmware Update Granularity: No Information Provided 00:31:53.186 Per-Namespace SMART Log: Yes 00:31:53.186 Asymmetric Namespace Access Log Page: Supported 00:31:53.186 ANA Transition Time : 10 sec 00:31:53.186 00:31:53.186 Asymmetric Namespace Access Capabilities 00:31:53.186 ANA Optimized State : Supported 00:31:53.186 ANA Non-Optimized State : Supported 00:31:53.186 ANA Inaccessible State : Supported 00:31:53.186 ANA Persistent Loss State : Supported 00:31:53.186 ANA Change State : Supported 00:31:53.186 ANAGRPID is not changed : No 00:31:53.186 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:53.186 00:31:53.186 ANA Group Identifier Maximum : 128 00:31:53.186 Number of ANA Group Identifiers : 128 00:31:53.186 Max Number of Allowed Namespaces : 1024 00:31:53.186 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:53.186 Command Effects Log Page: Supported 00:31:53.186 Get Log Page Extended Data: Supported 00:31:53.186 Telemetry Log Pages: Not Supported 00:31:53.186 Persistent Event Log Pages: Not Supported 00:31:53.186 Supported Log Pages Log Page: May Support 00:31:53.186 Commands Supported & Effects Log Page: Not Supported 00:31:53.186 Feature Identifiers & Effects Log Page:May Support 00:31:53.186 NVMe-MI Commands & Effects Log Page: May Support 00:31:53.186 Data Area 4 for Telemetry Log: Not Supported 00:31:53.186 Error Log Page Entries Supported: 128 00:31:53.186 Keep Alive: Supported 00:31:53.186 Keep Alive Granularity: 1000 ms 00:31:53.186 00:31:53.186 NVM Command Set Attributes 00:31:53.186 ========================== 00:31:53.186 Submission Queue Entry Size 00:31:53.186 Max: 64 00:31:53.186 Min: 64 00:31:53.186 Completion Queue Entry Size 00:31:53.186 Max: 16 00:31:53.186 Min: 16 00:31:53.186 Number of Namespaces: 1024 00:31:53.186 Compare Command: Not Supported 00:31:53.186 Write Uncorrectable Command: Not Supported 00:31:53.186 Dataset Management Command: Supported 00:31:53.186 Write Zeroes Command: Supported 00:31:53.186 Set Features Save Field: Not Supported 00:31:53.186 Reservations: Not Supported 00:31:53.186 Timestamp: Not Supported 00:31:53.186 Copy: Not Supported 00:31:53.186 Volatile Write Cache: Present 00:31:53.186 Atomic Write Unit (Normal): 1 00:31:53.186 Atomic Write Unit (PFail): 1 00:31:53.186 Atomic Compare & Write Unit: 1 00:31:53.186 Fused Compare & Write: Not Supported 00:31:53.186 Scatter-Gather List 00:31:53.186 SGL Command Set: Supported 00:31:53.186 SGL Keyed: Not Supported 00:31:53.186 SGL Bit Bucket Descriptor: Not Supported 00:31:53.186 SGL Metadata Pointer: Not Supported 00:31:53.186 Oversized SGL: Not Supported 00:31:53.186 SGL Metadata Address: Not Supported 00:31:53.186 SGL Offset: Supported 00:31:53.186 Transport SGL Data Block: Not Supported 00:31:53.186 Replay Protected Memory Block: Not Supported 00:31:53.186 00:31:53.186 Firmware Slot Information 00:31:53.186 ========================= 00:31:53.186 Active slot: 0 00:31:53.186 00:31:53.186 Asymmetric Namespace Access 00:31:53.186 =========================== 00:31:53.186 Change Count : 0 00:31:53.186 Number of ANA Group Descriptors : 1 00:31:53.186 ANA Group Descriptor : 0 00:31:53.186 ANA Group ID : 1 00:31:53.186 Number of NSID Values : 1 00:31:53.186 Change Count : 0 00:31:53.186 ANA State : 1 00:31:53.186 Namespace Identifier : 1 00:31:53.186 00:31:53.186 Commands Supported and Effects 00:31:53.186 ============================== 00:31:53.186 Admin Commands 00:31:53.186 -------------- 00:31:53.186 Get Log Page (02h): Supported 00:31:53.186 Identify (06h): Supported 00:31:53.186 Abort (08h): Supported 00:31:53.186 Set Features (09h): Supported 00:31:53.186 Get Features (0Ah): Supported 00:31:53.186 Asynchronous Event Request (0Ch): Supported 00:31:53.186 Keep Alive (18h): Supported 00:31:53.186 I/O Commands 00:31:53.186 ------------ 00:31:53.186 Flush (00h): Supported 00:31:53.186 Write (01h): Supported LBA-Change 00:31:53.186 Read (02h): Supported 00:31:53.186 Write Zeroes (08h): Supported LBA-Change 00:31:53.186 Dataset Management (09h): Supported 00:31:53.186 00:31:53.186 Error Log 00:31:53.186 ========= 00:31:53.186 Entry: 0 00:31:53.186 Error Count: 0x3 00:31:53.186 Submission Queue Id: 0x0 00:31:53.186 Command Id: 0x5 00:31:53.186 Phase Bit: 0 00:31:53.186 Status Code: 0x2 00:31:53.186 Status Code Type: 0x0 00:31:53.186 Do Not Retry: 1 00:31:53.186 
00:31:53.186 Error Log
00:31:53.186 =========
00:31:53.186 Entry: 0
00:31:53.186 Error Count: 0x3
00:31:53.186 Submission Queue Id: 0x0
00:31:53.186 Command Id: 0x5
00:31:53.186 Phase Bit: 0
00:31:53.186 Status Code: 0x2
00:31:53.186 Status Code Type: 0x0
00:31:53.186 Do Not Retry: 1
00:31:53.186 Error Location: 0x28
00:31:53.186 LBA: 0x0
00:31:53.186 Namespace: 0x0
00:31:53.186 Vendor Log Page: 0x0
00:31:53.186 -----------
00:31:53.186 Entry: 1
00:31:53.186 Error Count: 0x2
00:31:53.186 Submission Queue Id: 0x0
00:31:53.186 Command Id: 0x5
00:31:53.186 Phase Bit: 0
00:31:53.186 Status Code: 0x2
00:31:53.186 Status Code Type: 0x0
00:31:53.186 Do Not Retry: 1
00:31:53.186 Error Location: 0x28
00:31:53.186 LBA: 0x0
00:31:53.186 Namespace: 0x0
00:31:53.186 Vendor Log Page: 0x0
00:31:53.186 -----------
00:31:53.186 Entry: 2
00:31:53.186 Error Count: 0x1
00:31:53.186 Submission Queue Id: 0x0
00:31:53.186 Command Id: 0x4
00:31:53.186 Phase Bit: 0
00:31:53.186 Status Code: 0x2
00:31:53.186 Status Code Type: 0x0
00:31:53.186 Do Not Retry: 1
00:31:53.186 Error Location: 0x28
00:31:53.186 LBA: 0x0
00:31:53.186 Namespace: 0x0
00:31:53.186 Vendor Log Page: 0x0
00:31:53.186 
00:31:53.186 Number of Queues
00:31:53.186 ================
00:31:53.186 Number of I/O Submission Queues: 128
00:31:53.186 Number of I/O Completion Queues: 128
00:31:53.186 
00:31:53.186 ZNS Specific Controller Data
00:31:53.186 ============================
00:31:53.186 Zone Append Size Limit: 0
00:31:53.186 
00:31:53.186 
00:31:53.186 Active Namespaces
00:31:53.186 =================
00:31:53.186 get_feature(0x05) failed
00:31:53.186 Namespace ID:1
00:31:53.186 Command Set Identifier: NVM (00h)
00:31:53.186 Deallocate: Supported
00:31:53.186 Deallocated/Unwritten Error: Not Supported
00:31:53.186 Deallocated Read Value: Unknown
00:31:53.186 Deallocate in Write Zeroes: Not Supported
00:31:53.186 Deallocated Guard Field: 0xFFFF
00:31:53.186 Flush: Supported
00:31:53.186 Reservation: Not Supported
00:31:53.186 Namespace Sharing Capabilities: Multiple Controllers
00:31:53.186 Size (in LBAs): 3750748848 (1788GiB)
00:31:53.186 Capacity (in LBAs): 3750748848 (1788GiB)
00:31:53.186 Utilization (in LBAs): 3750748848 (1788GiB)
00:31:53.187 UUID: d8821813-8f15-41c1-8e92-469b4bcfee1f
00:31:53.187 Thin Provisioning: Not Supported
00:31:53.187 Per-NS Atomic Units: Yes
00:31:53.187 Atomic Write Unit (Normal): 8
00:31:53.187 Atomic Write Unit (PFail): 8
00:31:53.187 Preferred Write Granularity: 8
00:31:53.187 Atomic Compare & Write Unit: 8
00:31:53.187 Atomic Boundary Size (Normal): 0
00:31:53.187 Atomic Boundary Size (PFail): 0
00:31:53.187 Atomic Boundary Offset: 0
00:31:53.187 NGUID/EUI64 Never Reused: No
00:31:53.187 ANA group ID: 1
00:31:53.187 Namespace Write Protected: No
00:31:53.187 Number of LBA Formats: 1
00:31:53.187 Current LBA Format: LBA Format #00
00:31:53.187 LBA Format #00: Data Size: 512 Metadata Size: 0
00:31:53.187 
00:31:53.187 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:31:53.187 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:53.187 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:31:53.187 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:53.187 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:31:53.187 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:53.187 06:43:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:53.187 rmmod nvme_tcp
00:31:53.187 rmmod nvme_fabrics
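
The three error-log entries in the report above all decode to Status Code Type 0x0 (Generic Command Status) with Status Code 0x2, i.e. Invalid Field in Command, and line up with the get_feature(0x05) failure shown under Active Namespaces: the kernel target rejects the optional Error Recovery feature (FID 05h) that the identify pass probes for. A hedged sketch for re-reading those entries by hand, assuming the controller is still connected as /dev/nvme0:

    nvme error-log /dev/nvme0 -e 3        # dump the three entries reported above
    nvme get-feature /dev/nvme0 -f 0x05   # Error Recovery; expect the same rejection
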
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:53.187 06:43:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:55.742 06:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:55.742 06:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:31:55.742 06:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:31:55.742 06:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:31:55.742 06:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:31:55.742 06:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:31:55.742 06:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:31:55.742 06:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:31:55.742 06:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:31:55.742 06:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
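
clean_kernel_target above tears the configfs-based kernel target down in the reverse order of its creation. Collapsed into a standalone sketch (paths exactly as traced; the bare `echo 0` presumably disables the namespace before removal, since xtrace does not capture its redirect target):

    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet
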
00:31:55.742 06:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:31:59.046 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:31:59.046 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:31:59.619 
00:31:59.619 real 0m19.966s
00:31:59.619 user 0m5.463s
00:31:59.619 sys 0m11.424s
00:31:59.619 06:43:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable
00:31:59.619 06:43:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:31:59.619 ************************************
00:31:59.619 END TEST nvmf_identify_kernel_target
00:31:59.619 ************************************
00:31:59.619 06:43:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:31:59.619 06:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:31:59.619 06:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:31:59.619 06:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:59.619 ************************************
00:31:59.619 START TEST nvmf_auth_host
00:31:59.619 ************************************
00:31:59.619 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:31:59.619 * Looking for test storage...
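
The ioatdma/nvme -> vfio-pci lines above are setup.sh reclaiming the devices for userspace DMA before the next test starts. setup.sh itself is not traced here, but per device this is roughly the standard sysfs rebind dance; a hypothetical standalone sketch (setup.sh adds hugepage and bookkeeping logic on top):

    dev=0000:65:00.0                                          # one of the addresses listed above
    echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"   # detach nvme/ioatdma
    echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe                  # rebind to vfio-pci
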
00:31:59.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:59.619 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:59.619 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:31:59.619 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:59.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.880 --rc genhtml_branch_coverage=1 00:31:59.880 --rc genhtml_function_coverage=1 00:31:59.880 --rc genhtml_legend=1 00:31:59.880 --rc geninfo_all_blocks=1 00:31:59.880 --rc geninfo_unexecuted_blocks=1 00:31:59.880 00:31:59.880 ' 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:59.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.880 --rc genhtml_branch_coverage=1 00:31:59.880 --rc genhtml_function_coverage=1 00:31:59.880 --rc genhtml_legend=1 00:31:59.880 --rc geninfo_all_blocks=1 00:31:59.880 --rc geninfo_unexecuted_blocks=1 00:31:59.880 00:31:59.880 ' 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:59.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.880 --rc genhtml_branch_coverage=1 00:31:59.880 --rc genhtml_function_coverage=1 00:31:59.880 --rc genhtml_legend=1 00:31:59.880 --rc geninfo_all_blocks=1 00:31:59.880 --rc geninfo_unexecuted_blocks=1 00:31:59.880 00:31:59.880 ' 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:59.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.880 --rc genhtml_branch_coverage=1 00:31:59.880 --rc genhtml_function_coverage=1 00:31:59.880 --rc genhtml_legend=1 00:31:59.880 --rc geninfo_all_blocks=1 00:31:59.880 --rc geninfo_unexecuted_blocks=1 00:31:59.880 00:31:59.880 ' 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.880 06:43:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.880 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:59.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:59.881 06:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.028 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:08.029 06:43:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:08.029 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:08.029 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.029 
06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:08.029 Found net devices under 0000:31:00.0: cvl_0_0 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:08.029 Found net devices under 0000:31:00.1: cvl_0_1 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.029 06:43:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.029 06:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:08.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:32:08.029 00:32:08.029 --- 10.0.0.2 ping statistics --- 00:32:08.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.029 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:08.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:32:08.029 00:32:08.029 --- 10.0.0.1 ping statistics --- 00:32:08.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.029 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.029 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2869880 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2869880 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2869880 ']' 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
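
The trace above wires up the split-stack topology this suite uses on a single host: one port of the e810 pair (cvl_0_0) moves into a private network namespace to act as the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables ACCEPT rule opens port 4420 and the two pings verify both directions. As a standalone sketch:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Note that common.sh tags its iptables rule with an SPDK_NVMF comment, which is what lets nvmftestfini strip it later via iptables-save | grep -v SPDK_NVMF | iptables-restore. The nvmf_tgt launched just above runs inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, with -L enabling the nvme_auth debug log component), which is why the test then waits on /var/tmp/spdk.sock before issuing RPCs.
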
00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:08.030 06:43:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.291 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:08.291 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:32:08.291 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:08.291 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:08.291 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b68a9570eaad117c6ea7e8f9fd1a32b0 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.coX 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b68a9570eaad117c6ea7e8f9fd1a32b0 0 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b68a9570eaad117c6ea7e8f9fd1a32b0 0 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b68a9570eaad117c6ea7e8f9fd1a32b0 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.coX 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.coX 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.coX 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:08.553 06:43:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5ae2ed608f1839e45be9d3c16c2a5b49728fb4a7151f7f39e4201134ddd8e003 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gQR 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5ae2ed608f1839e45be9d3c16c2a5b49728fb4a7151f7f39e4201134ddd8e003 3 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5ae2ed608f1839e45be9d3c16c2a5b49728fb4a7151f7f39e4201134ddd8e003 3 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5ae2ed608f1839e45be9d3c16c2a5b49728fb4a7151f7f39e4201134ddd8e003 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gQR 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gQR 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.gQR 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=90af8fe1014c8668d7432d135450c768d3b972bffd86d245 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AUg 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 90af8fe1014c8668d7432d135450c768d3b972bffd86d245 0 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 90af8fe1014c8668d7432d135450c768d3b972bffd86d245 0 
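
gen_dhchap_key, traced repeatedly above, draws N random bytes with xxd and wraps them in the DH-HMAC-CHAP secret representation DHHC-1:<hash-id>:<base64 payload>:, with hash-id taken from the digests map in the trace (null=0, sha256=1, sha384=2, sha512=3). A condensed, hedged sketch of the same flow for one "null 32" key; the python payload is an approximation of what common.sh's formatter computes (base64 over the raw key plus a trailing little-endian CRC32):

    key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars, as in the trace
    file=$(mktemp -t spdk.key-null.XXX)
    python3 -c 'import base64,binascii,struct,sys; k=bytes.fromhex(sys.argv[1]); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+struct.pack("<I",binascii.crc32(k))).decode()))' "$key" 0 > "$file"
    chmod 0600 "$file"                     # keys must not be world-readable

The files produced this way are what the rpc_cmd keyring_file_add_key calls further down register with the target as key0/ckey0 through key4.
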
00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=90af8fe1014c8668d7432d135450c768d3b972bffd86d245 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AUg 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AUg 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.AUg 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5ff000a68fed195f6868ce821ef4dd504e89cd254d3ad4ac 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.x0n 00:32:08.553 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5ff000a68fed195f6868ce821ef4dd504e89cd254d3ad4ac 2 00:32:08.554 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5ff000a68fed195f6868ce821ef4dd504e89cd254d3ad4ac 2 00:32:08.554 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:08.554 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:08.554 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5ff000a68fed195f6868ce821ef4dd504e89cd254d3ad4ac 00:32:08.554 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:32:08.554 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.x0n 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.x0n 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.x0n 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:08.815 06:43:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0edc1bbd832c672357ef51f3e25d17e5 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0Wk 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0edc1bbd832c672357ef51f3e25d17e5 1 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0edc1bbd832c672357ef51f3e25d17e5 1 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0edc1bbd832c672357ef51f3e25d17e5 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0Wk 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0Wk 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.0Wk 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:08.815 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f633850b3c9c1ff67337ba3ac13f4a01 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vuK 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f633850b3c9c1ff67337ba3ac13f4a01 1 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f633850b3c9c1ff67337ba3ac13f4a01 1 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=f633850b3c9c1ff67337ba3ac13f4a01 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vuK 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vuK 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.vuK 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e5c0faa106ec594839a124723ec3ba691bb6f79ecad3a537 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.RfP 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e5c0faa106ec594839a124723ec3ba691bb6f79ecad3a537 2 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e5c0faa106ec594839a124723ec3ba691bb6f79ecad3a537 2 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e5c0faa106ec594839a124723ec3ba691bb6f79ecad3a537 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.RfP 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.RfP 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.RfP 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:08.816 06:43:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2e75b45561e8840db003930c40860983 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.teX 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2e75b45561e8840db003930c40860983 0 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2e75b45561e8840db003930c40860983 0 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2e75b45561e8840db003930c40860983 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:08.816 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.teX 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.teX 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.teX 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3eca6baebad638c3cec2e2675a60fe085d08b9a01cea9bb28a696052079c8850 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9Dw 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3eca6baebad638c3cec2e2675a60fe085d08b9a01cea9bb28a696052079c8850 3 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3eca6baebad638c3cec2e2675a60fe085d08b9a01cea9bb28a696052079c8850 3 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3eca6baebad638c3cec2e2675a60fe085d08b9a01cea9bb28a696052079c8850 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9Dw 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9Dw 00:32:09.077 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.9Dw 00:32:09.078 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:09.078 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2869880 00:32:09.078 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2869880 ']' 00:32:09.078 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.078 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:09.078 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.078 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:09.078 06:43:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.coX 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.gQR ]] 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gQR 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.AUg 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.x0n ]] 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.x0n 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0Wk 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.339 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.vuK ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vuK 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.RfP 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.teX ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.teX 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.9Dw 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:09.340 06:43:29 
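
[Editor's note] The gen_dhchap_key trace above is this test's key factory: it pulls len hex characters out of /dev/urandom (xxd -p -c0 -l $((len / 2))), wraps them in the DHHC-1:<hmac-id>: envelope with an inline python step, locks the file to mode 0600, and echoes the path back so host/auth.sh can stash it in keys[]/ckeys[] and register it via rpc_cmd keyring_file_add_key, as just traced. A minimal stand-alone sketch of the same recipe; the python body is hidden by the trace, so the base64-plus-trailing-CRC-32 encoding shown here is an assumption modeled on the DHHC-1 secret format, and SPDK's nvmf/common.sh remains the authoritative version:

#!/usr/bin/env bash
# Sketch of the gen_dhchap_key flow traced above (assumed reconstruction).
gen_dhchap_key() {                 # usage: gen_dhchap_key sha256 32
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars = len/2 random bytes
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, sys, zlib
secret = sys.argv[1].encode()                     # the hex string itself is the key material
crc = zlib.crc32(secret).to_bytes(4, "little")    # trailing CRC-32; little-endian assumed
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(secret + crc).decode()}:")
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

Run five times with varying digest/length pairs, this yields the keys[0..4] and ckeys[0..3] files registered above as key0..key4 and ckey0..ckey3; ckeys[4] is deliberately left empty, so key4 is later exercised without a controller key.
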
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:09.340 06:43:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:12.644 Waiting for block devices as requested 00:32:12.906 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:12.906 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:12.906 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:13.166 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:13.166 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:13.166 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:13.427 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:13.427 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:13.427 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:13.688 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:13.688 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:13.688 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:13.949 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:13.949 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:13.949 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:13.949 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:14.211 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:15.155 No valid GPT data, bailing 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:15.155 06:43:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:32:15.155 00:32:15.155 Discovery Log Number of Records 2, Generation counter 2 00:32:15.155 =====Discovery Log Entry 0====== 00:32:15.155 trtype: tcp 00:32:15.155 adrfam: ipv4 00:32:15.155 subtype: current discovery subsystem 00:32:15.155 treq: not specified, sq flow control disable supported 00:32:15.155 portid: 1 00:32:15.155 trsvcid: 4420 00:32:15.155 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:15.155 traddr: 10.0.0.1 00:32:15.155 eflags: none 00:32:15.155 sectype: none 00:32:15.155 =====Discovery Log Entry 1====== 00:32:15.155 trtype: tcp 00:32:15.155 adrfam: ipv4 00:32:15.155 subtype: nvme subsystem 00:32:15.155 treq: not specified, sq flow control disable supported 00:32:15.155 portid: 1 00:32:15.155 trsvcid: 4420 00:32:15.155 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:15.155 traddr: 10.0.0.1 00:32:15.155 eflags: none 00:32:15.155 sectype: none 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
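
[Editor's note] configure_kernel_target and nvmet_auth_init stand up the in-kernel target entirely through configfs: mkdir creates subsystem/namespace/port objects, echo fills their attributes, and symlinks publish the subsystem on the port and pin the allowed host. The trace only shows the echoed values, so the attribute file names below are the standard nvmet ones, matched one-to-one to the writes above (a sketch, not a verbatim copy of common.sh; the attr_model mapping for the SPDK- string is an assumption):

# Kernel NVMe-oF/TCP target, as traced in configure_kernel_target above
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

modprobe nvmet                                   # nvmet-tcp must also be loadable for the TCP port
mkdir -p "$subsys/namespaces/1" "$port"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string (file name assumed)
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # the GPT-free disk found above
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"              # start listening; 'nvme discover' now returns
                                                 # the two discovery log entries printed above

# nvmet_auth_init: require explicit, authenticated hosts from here on
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"
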
-- host/auth.sh@49 -- # echo ffdhe2048 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:15.155 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:15.156 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:15.156 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.156 06:43:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.416 nvme0n1 00:32:15.416 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.416 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.416 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.416 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
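
[Editor's note] With the plumbing in place, each authentication step is a pair of moves. nvmet_auth_set_key programs the kernel-side expectation by echoing the digest, DH group, and DHHC-1 secrets into the host entry (the dhchap_* file names below are the standard nvmet host attributes; the trace prints only the values). connect_authenticate then drives the SPDK initiator: bdev_nvme_set_options pins the negotiable digests and DH groups, and bdev_nvme_attach_controller connects with --dhchap-key, plus --dhchap-ctrlr-key for bidirectional authentication. One pass, condensed from the trace just above (secrets elided):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# target side: what nvmet_auth_set_key's echo calls write out
echo 'hmac(sha256)'         > "$host/dhchap_hash"
echo ffdhe2048              > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:OTBh...==:' > "$host/dhchap_key"        # keys[1], elided here
echo 'DHHC-1:02:NWZm...==:' > "$host/dhchap_ctrl_key"   # ckeys[1], elided here

# initiator side: constrain the negotiation, then connect with keyring names
rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc.py bdev_nvme_get_controllers         # "nvme0" in the output means auth succeeded
rpc.py bdev_nvme_detach_controller nvme0

rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py pointed at /var/tmp/spdk.sock.
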
00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.417 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.678 nvme0n1 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.678 06:43:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:15.678 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:15.679 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:15.679 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.679 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.940 nvme0n1 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.940 nvme0n1 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.940 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.252 06:43:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.252 nvme0n1 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:16.252 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.253 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.253 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:16.253 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.253 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:16.253 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:16.253 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:16.253 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:16.253 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.253 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.551 nvme0n1 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.551 06:43:36 
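
[Editor's note] Everything from the first "nvme0n1" marker onward sweeps this handshake across the full matrix: every digest, every FFDHE group, every key index 0-4, reprogramming the kernel host entry and re-attaching each time. The iterations above (sha256/ffdhe2048, key0 through key4) and the ffdhe3072 run that follows all reduce to this shape, taken from the host/auth.sh@100-104 loop heads in the trace:

# Shape of the sweep driven by host/auth.sh@100-104 (condensed sketch)
for digest in sha256 sha384 sha512; do
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"            # kernel target expectation
      rpc.py bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup" # pin one pair per pass
      rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}        # key4 has no ctrlr key
      rpc.py bdev_nvme_detach_controller nvme0                    # tear down for the next pass
    done
  done
done
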
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.551 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.817 nvme0n1 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:16.817 
06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.817 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.078 nvme0n1 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.078 06:43:36 
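The block above is one pass of the matrix this test sweeps (digest x dhgroup x keyid). Each pass has two halves: nvmet_auth_set_key programs the kernel target's expected secret for hmac(sha256)/ffdhe3072, and connect_authenticate pins the SPDK host to that same digest/dhgroup before attaching with the matching key pair. A condensed sketch of one pass follows; the nvmet configfs paths and the rpc.py call behind the rpc_cmd wrapper are assumptions, since xtrace does not print the redirection targets of the echo calls at auth.sh@48-51:

# One pass of the auth matrix (sketch, not the literal script body).
# Assumed: configfs layout, and rpc_cmd = harness wrapper for scripts/rpc.py.
digest=sha256 dhgroup=ffdhe3072 keyid=1
key="DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==:"
ckey="DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==:"
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # path assumed

# Target side: what the subsystem will demand from this host NQN.
echo "hmac($digest)" > "$host/dhchap_hash"
echo "$dhgroup"      > "$host/dhchap_dhgroup"
echo "$key"          > "$host/dhchap_key"
echo "$ckey"         > "$host/dhchap_ctrl_key"

# Host side: restrict SPDK to one digest/dhgroup, then attach using the
# keyring entries key1/ckey1 registered earlier in the test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"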
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.078 06:43:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.339 nvme0n1 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.339 06:43:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.339 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:17.340 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:17.340 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:17.340 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:17.340 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.340 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.601 nvme0n1 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:17.601 06:43:37 
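Note the [[ -z '' ]] just above: key 4 has no controller key configured, so the bidirectional half is skipped for it. The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion recorded at auth.sh@58 is what makes the flag optional, and the attach in the pass that follows therefore carries only --dhchap-key key4, exercising unidirectional authentication. A minimal, self-contained sketch of that expansion, with a placeholder standing in for the real secret:

# How the optional --dhchap-ctrlr-key flag is built (expansion copied
# from the trace at auth.sh@58; the ckeys values here are placeholders):
declare -a ckeys=([1]="DHHC-1:02:placeholder==:" [4]="")
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-<no ctrlr key, unidirectional>}"
done
# keyid=1 -> --dhchap-ctrlr-key ckey1
# keyid=4 -> <no ctrlr key, unidirectional>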
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.601 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.862 nvme0n1 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.862 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.123 nvme0n1 00:32:18.123 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.123 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.123 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.123 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.123 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.123 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.123 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.123 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.123 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.123 06:43:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:18.123 06:43:38 
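The get_main_ns_ip / ip_candidates lines that repeat before every attach come from a small helper in nvmf/common.sh that maps the transport to the environment variable holding the connect address and then dereferences it (tcp -> NVMF_INITIATOR_IP -> 10.0.0.1 on this rig). A reconstruction inferred from the trace; the exact body in this SPDK revision may differ:

# Inferred body of get_main_ns_ip (nvmf/common.sh@769-783 in the trace).
# The indirect ${!ip} dereference is an assumption that matches the
# '[[ -z 10.0.0.1 ]]' and 'echo 10.0.0.1' steps logged above.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> "NVMF_INITIATOR_IP"
    [[ -z ${!ip} ]] && return 1            # NVMF_INITIATOR_IP=10.0.0.1 here
    echo "${!ip}"
}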
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:18.123 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:18.124 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:18.124 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.124 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.384 nvme0n1 00:32:18.384 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:32:18.384 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.384 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.384 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.384 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.645 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.906 nvme0n1 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.907 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.169 nvme0n1 00:32:19.169 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.169 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.169 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.169 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.169 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.169 06:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.169 06:43:39 
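All of the secrets echoed in this run share the DHHC-1 representation defined for NVMe in-band authentication, the same format nvme-cli's gen-dhchap-key emits. The second field selects the transformation hash and the base64 payload is the secret with a CRC-32 appended; the breakdown below summarizes that format from the spec rather than re-deriving it from this log:

# DHHC-1:<hh>:<base64(secret + 4-byte CRC-32)>:
#   <hh> transformation hash: 00 = none, 01 = SHA-256,
#        02 = SHA-384, 03 = SHA-512   (per the NVMe auth spec)
key="DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=:"
IFS=: read -r _ hash b64 _ <<< "$key"
echo "$b64" | base64 -d | wc -c    # 68 bytes: 64-byte secret + CRC-32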
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:19.169 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:19.170 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.170 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.430 nvme0n1 00:32:19.430 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.430 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.430 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.430 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.430 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.431 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.692 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.953 nvme0n1 00:32:19.953 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.953 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.953 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.953 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.953 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.953 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 
00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.214 06:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.475 nvme0n1 00:32:20.475 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.475 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.475 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.475 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.475 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.475 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.475 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.475 06:43:40 
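Between every attach and the next pass, the recurring nvme0n1 lines mark the namespace surfacing on the host, and the script proves the handshake succeeded by name rather than by I/O: it lists controllers, asserts nvme0 exists, and detaches so the next digest/dhgroup/key combination starts clean. The jq pipeline is verbatim from the trace; the [[ ]] assertion is reconstructed from the escaped pattern match xtrace prints:

# auth.sh@64-65 as traced (rpc_cmd is the harness wrapper around
# scripts/rpc.py; '[[ nvme0 == \n\v\m\e\0 ]]' is this test in escaped form):
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                       # controller present -> auth succeeded
rpc_cmd bdev_nvme_detach_controller nvme0    # tear down before the next pass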
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.475 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.475 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.736 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.997 nvme0n1 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:20.997 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.998 06:43:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.568 nvme0n1 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.568 06:43:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:21.568 06:43:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.568 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:21.569 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:21.569 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:21.569 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:21.569 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.569 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.139 nvme0n1 00:32:22.139 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.139 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.139 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.139 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.139 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.139 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.139 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.139 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.140 06:43:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:22.711 nvme0n1 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.711 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.972 06:43:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.543 nvme0n1 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:23.543 
06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.543 06:43:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.114 nvme0n1 00:32:24.114 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.114 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.114 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.114 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.114 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.114 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.375 
06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:24.375 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.376 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:24.376 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:24.376 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:24.376 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:24.376 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.376 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.947 nvme0n1 00:32:24.947 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.947 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.947 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.948 06:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.891 nvme0n1 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.891 nvme0n1 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:25.891 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.892 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.153 nvme0n1 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:26.153 06:43:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.153 06:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.154 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.416 nvme0n1 00:32:26.416 06:43:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.416 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.677 nvme0n1 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.677 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.678 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.938 nvme0n1 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:26.938 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.939 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.199 nvme0n1 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.199 
06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:27.199 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:27.200 06:43:46 
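The ip_candidates expansion traced here is the get_main_ns_ip helper from nvmf/common.sh choosing which address variable to dereference for the active transport. Condensed, and with the transport selector assumed to be the suite's usual TEST_TRANSPORT variable (the guards in the trace have already collapsed to tcp), the logic amounts to:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    # Select the variable *name* for the transport, then dereference it.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    echo "${!ip}"    # resolves to 10.0.0.1 for tcp in this run
}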
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.200 06:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.460 nvme0n1 00:32:27.460 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.460 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.460 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.460 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.460 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.460 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.460 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.460 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.461 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.723 nvme0n1 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.723 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.984 nvme0n1 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:27.984 
06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:27.984 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:27.985 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.985 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.985 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:27.985 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.985 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:27.985 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:27.985 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:27.985 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:27.985 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.985 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.245 nvme0n1 00:32:28.245 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.245 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.245 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.246 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.246 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.246 
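Every attach is bracketed by the verification and teardown traced around it: bdev_nvme_get_controllers must report exactly the controller that was created, and it is then detached so the next (dhgroup, keyid) combination starts clean. In plain form, using the same commands the script expands:

# The attach RPC already fails if DH-HMAC-CHAP negotiation fails,
# but the test still asserts the controller really exists.
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Tear down before the next dhgroup/keyid combination.
scripts/rpc.py bdev_nvme_detach_controller nvme0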
06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.246 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.246 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.246 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.246 06:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.246 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.507 nvme0n1 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:28.507 06:43:48 
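All of the secrets echoed in this log share the DHHC-1:<id>:<base64>: container format produced by nvme-cli's gen-dhchap-key; reading the data (background on the format, not something the script itself checks), the id field selects an optional hash transformation of the secret (00 none, 01/02/03 for SHA-256/384/512) and the base64 payload is the raw secret with a 4-byte CRC-32 appended. The lengths in this log are consistent with that, e.g. for the keyid=2 secret echoed above:

key='DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP:'
b64=${key#DHHC-1:01:}; b64=${b64%:}
# 48 base64 chars decode to 36 bytes: a 32-byte secret + 4-byte CRC-32.
echo -n "$b64" | base64 -d | wc -c    # prints 36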
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.507 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.768 nvme0n1 00:32:28.768 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.768 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.768 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.768 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.768 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.768 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.768 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.768 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.768 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.768 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.029 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.290 nvme0n1 00:32:29.290 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.290 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.290 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.290 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.290 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.290 06:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.290 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.552 nvme0n1 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:29.552 06:43:49 
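By this point the pattern is fully visible: the @101/@102 markers are the two loops in host/auth.sh driving the sweep, and @103/@104 are the per-iteration target-side key setup and host-side connect. Schematically (sha384 is itself one value of an enclosing digest loop that began before this excerpt):

for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do      # key slots 0..4
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"     # program the target side
        connect_authenticate sha384 "$dhgroup" "$keyid"   # attach, verify, detach
    done
done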
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.552 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.813 nvme0n1 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
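By this point the sha384/ffdhe4096 pass has cycled all five key IDs through the same four-step sequence. Condensed from the rpc_cmd calls in the trace (rpc_cmd being the test suite's wrapper around SPDK's scripts/rpc.py), one connect_authenticate iteration amounts to:

    # One iteration, condensed from the trace above (keyid 3 shown; keyid 4
    # drops --dhchap-ctrlr-key because its ckey is empty).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # auth succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0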
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.813 06:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.384 nvme0n1 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.384 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.955 nvme0n1 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.955 06:43:50 
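The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment traced at host/auth.sh@58 relies on bash's :+ alternate-value expansion to build an optional argument pair: if the controller key for the current keyid is empty or unset, the array stays empty and the attach is host-authenticated only. In isolation (values here are illustrative, not from this run):

    # bash ':+' expansion as used at host/auth.sh@58
    ckeys=([1]="some-secret" [4]="")
    keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"    # prints: --dhchap-ctrlr-key ckey1
    keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # prints: 0  (empty ckey4 contributes no arguments)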
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:30.955 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.956 06:43:50 
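The get_main_ns_ip block (nvmf/common.sh@769-783) repeats before every attach, including immediately below. Reconstructed from the xtrace, the helper is approximately the following; the transport variable name is an assumption, since the trace only shows the literal value tcp:

    # Approximate reconstruction of get_main_ns_ip from the trace; simplified.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # for tcp: the name NVMF_INITIATOR_IP
        ip=${!ip}                              # indirect expansion -> 10.0.0.1 in this run
        [[ -z $ip ]] && return 1
        echo "$ip"
    }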
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.956 06:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.217 nvme0n1 00:32:31.217 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.217 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.217 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.217 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.217 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:31.478 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:31.479 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:31.479 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.479 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.479 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:31.479 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.479 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:31.479 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:31.479 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:31.479 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.479 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.479 
06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.739 nvme0n1 00:32:31.739 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.739 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.739 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.739 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.739 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.739 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.000 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.001 06:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.262 nvme0n1 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.262 06:43:52 
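The loop headers visible in the trace (host/auth.sh@100, @101, @102) show that this whole section is one nested sweep over digests, DH groups, and key IDs; sha384 is being paired with ffdhe8192 here, and the digest advances to sha512 further down. The inferred shape, with array contents deduced from the values exercised in this run (entries not seen here, such as sha256, are assumptions):

    # Inferred iteration structure of the auth matrix (auth.sh@100-104).
    digests=(sha256 sha384 sha512)                                   # sha256 assumed
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)     # partial list assumed
    for digest in "${digests[@]}"; do                # auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do          # auth.sh@101
            for keyid in "${!keys[@]}"; do           # auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@104
            done
        done
    done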
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:32.262 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:32.523 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:32.523 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.523 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:32.523 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:32.523 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:32.523 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.524 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.094 nvme0n1 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.094 06:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.666 nvme0n1 00:32:33.666 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.666 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.666 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.666 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.666 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.666 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.926 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.926 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.926 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:33.926 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.926 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.926 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.926 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.927 
06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.927 06:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.499 nvme0n1 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.499 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.440 nvme0n1 00:32:35.440 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.440 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.440 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.440 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.440 06:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.440 06:43:55 
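All secrets in this run use the DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where, per the format produced by nvme-cli's gen-dhchap-key, <t> indicates how the secret was transformed (00 cleartext; 01/02/03 for SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32. A quick length sanity check on one of the keys seen above (sketch; the trailing colon must be stripped before decoding):

    # Decode a DHHC-1 secret and count the payload bytes.
    key='DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP:'
    b64=${key#DHHC-1:??:}   # strip the 'DHHC-1:<t>:' prefix
    b64=${b64%:}            # strip the trailing colon
    echo -n "$b64" | base64 -d | wc -c   # 36 bytes: a 32-byte secret plus CRC-32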
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:35.440 06:43:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.440 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 nvme0n1 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.011 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:36.272 nvme0n1 00:32:36.272 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.272 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.272 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.272 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.272 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.272 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.272 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.272 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.272 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.272 06:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.272 nvme0n1 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.272 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.533 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.533 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.533 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.533 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.533 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:36.534 
06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.534 nvme0n1 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.534 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.796 
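The get_main_ns_ip trace that recurs between every attach (most recently ending in the echo 10.0.0.1 above) reduces to a small transport-to-variable lookup. A condensed sketch reconstructed from the nvmf/common.sh@769-783 markers in the trace, with TEST_TRANSPORT assumed to be exported by the harness:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # unknown transport
      ip=${ip_candidates[$TEST_TRANSPORT]}                    # e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                             # indirect: that variable's value
      echo "${!ip}"                                           # here: 10.0.0.1
  }
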
06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:36.796 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.797 nvme0n1 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.797 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
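The DHHC-1 secrets cycled through here (the keyid-4 secret appears just above) follow the NVMe-defined representation DHHC-1:<t>:<base64>:, where <t> records how the secret was generated (00 = untransformed, 01/02/03 = HMAC-SHA-256/384/512, which also fixes the key length at 32/32/48/64 bytes) and the base64 payload carries the key bytes plus a trailing 4-byte CRC-32. A quick shell check of that framing, using the keyid-4 secret from this run (illustrative only; the test never decodes its keys):

  secret='DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=:'
  b64=${secret#DHHC-1:*:}   # strip the DHHC-1:<t>: prefix
  b64=${b64%:}              # and the trailing ':'
  printf '%s' "$b64" | base64 -d | wc -c   # 68 = 64 key bytes + 4 CRC-32 bytes for <t>=03
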
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.059 nvme0n1 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:37.059 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:37.060 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:37.060 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.060 06:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.321 nvme0n1 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.321 
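The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) idiom at host/auth.sh@58 is what lets keyid 4, which has no controller key, fall back to unidirectional authentication: the :+ alternate-value expansion yields the two-word flag pair when ckeys[keyid] is non-empty and an empty array otherwise, so "${ckey[@]}" silently vanishes from the attach invocation. The mechanism in isolation (function name hypothetical; the controller key is the keyid-1 value from this run):

  with_ctrlr_key() {
      local -a ckey=(${1:+--dhchap-ctrlr-key "ckey$2"})  # two words when $1 non-empty, zero otherwise
      echo "${#ckey[@]} arg(s): ${ckey[*]}"
  }
  with_ctrlr_key 'DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==:' 1
  # -> 2 arg(s): --dhchap-ctrlr-key ckey1
  with_ctrlr_key '' 4
  # -> 0 arg(s):
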
06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:37.321 06:43:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.321 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.583 nvme0n1 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:37.583 06:43:57 
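Every connect_authenticate pass in this stretch, like the sha512/ffdhe3072 keyid-1 pass just completed, is the same four-RPC conversation with the SPDK initiator's JSON-RPC server. A minimal sketch of one pass, assuming scripts/rpc.py from the SPDK tree stands in for the harness's rpc_cmd wrapper, and that key1/ckey1 are key names registered earlier in the run (outside this excerpt):

  rpc=scripts/rpc.py   # path assumed; rpc_cmd keeps a persistent socket instead
  # 1. restrict the initiator to the digest/dhgroup pair under test
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # 2. connect with the host key (plus controller key for bidirectional auth)
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # 3. the controller (and its namespace nvme0n1) exists only if DH-HMAC-CHAP succeeded
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # 4. tear down before the next (digest, dhgroup, keyid) combination
  $rpc bdev_nvme_detach_controller nvme0
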
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.583 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.844 nvme0n1 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.844 06:43:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.844 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.106 nvme0n1 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:38.106 
06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.106 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.107 06:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
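Stepping back, this whole stretch of log is one matrix walk: the host/auth.sh@100-102 loop markers show digests x dhgroups x keyids being iterated, with the kernel target re-keyed and the initiator re-authenticated for every combination. The likely shape of that driver loop, reconstructed from the trace (array contents are assumptions beyond what this excerpt exercises, which is sha384/sha512 over ffdhe2048/3072/4096/8192 with keyids 0-4):

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do           # keyid 4 carries no controller key
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done
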
00:32:38.369 nvme0n1 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:38.369 06:43:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.369 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.629 nvme0n1 00:32:38.629 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.629 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.629 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.629 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.629 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.629 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.890 06:43:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:38.890 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.891 06:43:58 
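
The DHHC-1 strings passed around in these cycles follow the NVMe-oF DH-HMAC-CHAP secret representation, DHHC-1:<tt>:<base64 of secret plus CRC-32>:, where the two-digit <tt> field records how the secret was transformed (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) — hence the key/ckey pairs in this run mixing 00, 01, 02 and 03 prefixes. Secrets of this shape are normally produced with nvme-cli; a hypothetical invocation follows (flag spellings per recent nvme-cli, worth double-checking against your version):

# Generate a 32-byte secret, transformed with SHA-256 (-> "01" field),
# bound to the given host NQN. Output looks like DHHC-1:01:<base64>:
nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn=nqn.2024-02.io.spdk:host0
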
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.891 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.151 nvme0n1 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.151 06:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.411 nvme0n1 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:39.411 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.412 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.672 nvme0n1 00:32:39.672 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.672 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.672 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.672 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.672 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.672 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.672 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.672 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.672 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.672 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.933 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.194 nvme0n1 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
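
Key index 4 above is the one combination attached without --dhchap-ctrlr-key: its ckey is empty, so only unidirectional (host-to-controller) authentication is exercised. The flag vanishes thanks to the ${var:+word} expansion at host/auth.sh@58, which yields an empty array when the controller secret is unset. A self-contained demonstration of that expansion:

ckeys=([1]="DHHC-1:02:..." [4]="")

# ${var:+word} expands to word only when var is set and non-empty.
ckey=(${ckeys[4]:+--dhchap-ctrlr-key "ckey4"})
echo "${#ckey[@]}"    # 0 -> attach runs with no controller key at all

ckey=(${ckeys[1]:+--dhchap-ctrlr-key "ckey1"})
echo "${#ckey[@]}"    # 2 -> "--dhchap-ctrlr-key" "ckey1" get appended
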
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.194 06:43:59 
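
At this point the trace rolls over from ffdhe4096 to ffdhe6144: the @101/@102 lines are the sweep's two loops starting their next round. Reconstructed from the loop headers visible in the trace, the driver is simply:

# host/auth.sh@101-104: for the current digest (sha512 in this stretch),
# walk every DH group, and for each group walk every key index 0..4.
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side
        connect_authenticate sha512 "$dhgroup" "$keyid"  # host side
    done
done

The groups visible in this stretch of the log are ffdhe4096, ffdhe6144 and ffdhe8192; each step up pays a larger modular-exponentiation cost during the DH exchange, which is consistent with the widening gaps between cycle timestamps in the later rounds.
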
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.194 06:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.765 nvme0n1 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:40.765 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:40.766 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.766 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.766 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:40.766 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.766 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:40.766 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:40.766 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:40.766 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.766 06:44:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.766 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.026 nvme0n1 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:41.026 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:41.027 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:41.027 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.027 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:41.027 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.027 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.027 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
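
One detail worth decoding in the recurring @64 check: [[ nvme0 == \n\v\m\e\0 ]]. Inside [[ ]] the right-hand side of == is a glob pattern, so the test escapes every character to force a literal comparison with the controller name returned by bdev_nvme_get_controllers. An equivalent, less cryptic spelling (again assuming scripts/rpc.py as the client):

name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]   # quoted RHS disables globbing, same effect as \n\v\m\e\0
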
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.027 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:41.027 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.027 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.286 06:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.547 nvme0n1 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.547 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.117 nvme0n1 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.117 06:44:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.117 06:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.684 nvme0n1 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjY4YTk1NzBlYWFkMTE3YzZlYTdlOGY5ZmQxYTMyYjAPzFkn: 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: ]] 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFlMmVkNjA4ZjE4MzllNDViZTlkM2MxNmMyYTViNDk3MjhmYjRhNzE1MWY3ZjM5ZTQyMDExMzRkZGQ4ZTAwMy6Hoto=: 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.685 06:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.254 nvme0n1 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:43.254 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:43.255 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.255 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.255 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.255 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:43.255 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.255 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:43.255 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.255 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.514 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.085 nvme0n1 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.085 06:44:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.085 06:44:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.085 06:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.655 nvme0n1 00:32:44.655 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.655 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.655 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.655 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.655 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.655 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTVjMGZhYTEwNmVjNTk0ODM5YTEyNDcyM2VjM2JhNjkxYmI2Zjc5ZWNhZDNhNTM3hgKe2Q==: 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: ]] 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmU3NWI0NTU2MWU4ODQwZGIwMDM5MzBjNDA4NjA5ODNeiAgl: 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.916 06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.916 
06:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.486 nvme0n1 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2VjYTZiYWViYWQ2MzhjM2NlYzJlMjY3NWE2MGZlMDg1ZDA4YjlhMDFjZWE5YmIyOGE2OTYwNTIwNzljODg1MByuz48=: 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.486 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.428 nvme0n1 00:32:46.428 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.428 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.428 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.428 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.428 06:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.428 request: 00:32:46.428 { 00:32:46.428 "name": "nvme0", 00:32:46.428 "trtype": "tcp", 00:32:46.428 "traddr": "10.0.0.1", 00:32:46.428 "adrfam": "ipv4", 00:32:46.428 "trsvcid": "4420", 00:32:46.428 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:46.428 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:46.428 "prchk_reftag": false, 00:32:46.428 "prchk_guard": false, 00:32:46.428 "hdgst": false, 00:32:46.428 "ddgst": false, 00:32:46.428 "allow_unrecognized_csi": false, 00:32:46.428 "method": "bdev_nvme_attach_controller", 00:32:46.428 "req_id": 1 00:32:46.428 } 00:32:46.428 Got JSON-RPC error response 00:32:46.428 response: 00:32:46.428 { 00:32:46.428 "code": -5, 00:32:46.428 "message": "Input/output error" 00:32:46.428 } 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:46.428 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
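The request/response pair above is this suite's first negative case: with DHCHAP still enforced on the kernel target, bdev_nvme_attach_controller issued without any --dhchap-key must fail, and rpc.py surfaces the rejected connect as JSON-RPC error -5 ("Input/output error"), which the NOT wrapper counts as a pass (es=1). A minimal sketch of reproducing that call by hand from an SPDK checkout, assuming the target and keys were set up as earlier in this run (the ./scripts/rpc.py path is the stock location, not taken from this log):

    # expected to fail while the target still requires DHCHAP for host0
    ./scripts/rpc.py bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        && echo 'unexpected success' || echo 'failed as expected'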
00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.429 request: 00:32:46.429 { 00:32:46.429 "name": "nvme0", 00:32:46.429 "trtype": "tcp", 00:32:46.429 "traddr": "10.0.0.1", 00:32:46.429 "adrfam": "ipv4", 00:32:46.429 "trsvcid": "4420", 00:32:46.429 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:46.429 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:46.429 "prchk_reftag": false, 00:32:46.429 "prchk_guard": false, 00:32:46.429 "hdgst": false, 00:32:46.429 "ddgst": false, 00:32:46.429 "dhchap_key": "key2", 00:32:46.429 "allow_unrecognized_csi": false, 00:32:46.429 "method": "bdev_nvme_attach_controller", 00:32:46.429 "req_id": 1 00:32:46.429 } 00:32:46.429 Got JSON-RPC error response 00:32:46.429 response: 00:32:46.429 { 00:32:46.429 "code": -5, 00:32:46.429 "message": "Input/output error" 00:32:46.429 } 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
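The same pattern repeats just above with "dhchap_key": "key2": the target-side entry for host0 was last keyed via nvmet_auth_set_key with keyid 1, so presenting key2 is also expected to fail with -5, and the key1/ckey2 mismatch attempted next works the same way. The NOT bookkeeping threaded through these traces (local es=0, es=1, (( es > 128 )), (( !es == 0 ))) simply inverts an exit status so an expected failure passes and an unexpected success fails. A condensed sketch of that helper, not the verbatim autotest_common.sh implementation:

    NOT() {
        local es=0
        "$@" || es=$?          # run the wrapped command, capture its status
        (( es != 0 ))          # succeed only if the command failed
    }
    # usage, as in the trace:
    # NOT rpc_cmd bdev_nvme_attach_controller ... --dhchap-key key2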
00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.429 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.690 request: 00:32:46.690 { 00:32:46.690 "name": "nvme0", 00:32:46.690 "trtype": "tcp", 00:32:46.690 "traddr": "10.0.0.1", 00:32:46.690 "adrfam": "ipv4", 00:32:46.690 "trsvcid": "4420", 00:32:46.690 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:46.690 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:46.690 "prchk_reftag": false, 00:32:46.690 "prchk_guard": false, 00:32:46.690 "hdgst": false, 00:32:46.690 "ddgst": false, 00:32:46.690 "dhchap_key": "key1", 00:32:46.690 "dhchap_ctrlr_key": "ckey2", 00:32:46.690 "allow_unrecognized_csi": false, 00:32:46.690 "method": "bdev_nvme_attach_controller", 00:32:46.690 "req_id": 1 00:32:46.690 } 00:32:46.690 Got JSON-RPC error response 00:32:46.690 response: 00:32:46.690 { 00:32:46.690 "code": -5, 00:32:46.690 "message": "Input/output 
error" 00:32:46.690 } 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.690 nvme0n1 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.690 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.950 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.950 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:32:46.950 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.951 request: 00:32:46.951 { 00:32:46.951 "name": "nvme0", 00:32:46.951 "dhchap_key": "key1", 00:32:46.951 "dhchap_ctrlr_key": "ckey2", 00:32:46.951 "method": "bdev_nvme_set_keys", 00:32:46.951 "req_id": 1 00:32:46.951 } 00:32:46.951 Got JSON-RPC error response 00:32:46.951 response: 00:32:46.951 { 00:32:46.951 "code": -13, 00:32:46.951 "message": "Permission denied" 00:32:46.951 } 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:46.951 06:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:47.891 06:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.891 06:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:47.891 06:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.891 06:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.891 06:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.151 06:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:48.151 06:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBhZjhmZTEwMTRjODY2OGQ3NDMyZDEzNTQ1MGM3NjhkM2I5NzJiZmZkODZkMjQ1SyHfvw==: 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: ]] 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWZmMDAwYTY4ZmVkMTk1ZjY4NjhjZTgyMWVmNGRkNTA0ZTg5Y2QyNTRkM2FkNGFjNv359A==: 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.094 06:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.354 nvme0n1 00:32:49.354 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.354 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:49.354 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.354 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.354 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGVkYzFiYmQ4MzJjNjcyMzU3ZWY1MWYzZTI1ZDE3ZTWmSpsP: 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: ]] 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjYzMzg1MGIzYzljMWZmNjczMzdiYTNhYzEzZjRhMDEf8xg/: 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.355 request: 00:32:49.355 { 00:32:49.355 "name": "nvme0", 00:32:49.355 "dhchap_key": "key2", 00:32:49.355 "dhchap_ctrlr_key": "ckey1", 00:32:49.355 "method": "bdev_nvme_set_keys", 00:32:49.355 "req_id": 1 00:32:49.355 } 00:32:49.355 Got JSON-RPC error response 00:32:49.355 response: 00:32:49.355 { 00:32:49.355 "code": -13, 00:32:49.355 "message": "Permission denied" 00:32:49.355 } 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:49.355 06:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:50.295 06:44:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:50.295 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:50.295 rmmod nvme_tcp 00:32:50.556 rmmod nvme_fabrics 00:32:50.556 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2869880 ']' 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2869880 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 2869880 ']' 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 2869880 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2869880 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2869880' 00:32:50.557 killing process with pid 2869880 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 2869880 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 2869880 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:32:50.557 06:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:53.104 06:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:56.465 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:56.465 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:57.053 06:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.coX /tmp/spdk.key-null.AUg /tmp/spdk.key-sha256.0Wk /tmp/spdk.key-sha384.RfP /tmp/spdk.key-sha512.9Dw /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:57.053 06:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:00.356 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
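(The device rebind listing resumes below.) A few entries up, cleanup dismantles the kernel nvmet target through configfs, strictly unlinking children before removing parents and only then unloading the modules. A hedged sketch of that sequence; the redirect target of the logged 'echo 0' is not captured by xtrace, so namespaces/1/enable is an assumption based on the usual nvmet layout:

    CFG=/sys/kernel/config/nvmet
    SUB=$CFG/subsystems/nqn.2024-02.io.spdk:cnode0
    rm    $SUB/allowed_hosts/nqn.2024-02.io.spdk:host0         # drop the host ACL symlink
    rmdir $CFG/hosts/nqn.2024-02.io.spdk:host0                 # remove the host entry
    echo 0 > $SUB/namespaces/1/enable                          # assumed destination of 'echo 0'
    rm -f $CFG/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0   # unlink port -> subsystem
    rmdir $SUB/namespaces/1
    rmdir $CFG/ports/1
    rmdir $SUB
    modprobe -r nvmet_tcp nvmet                                # unload once configfs is empty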
00:33:00.356 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:33:00.356 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:00.356 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:00.928 00:33:00.928 real 1m1.189s 00:33:00.928 user 0m54.880s 00:33:00.928 sys 0m16.256s 00:33:00.928 06:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:00.928 06:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.928 ************************************ 00:33:00.928 END TEST nvmf_auth_host 00:33:00.928 ************************************ 00:33:00.928 06:44:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:33:00.928 06:44:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:00.928 06:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:00.928 06:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:00.928 06:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.928 ************************************ 00:33:00.928 START TEST nvmf_digest 00:33:00.928 ************************************ 00:33:00.929 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:00.929 * Looking for test storage... 
00:33:00.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:00.929 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:00.929 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:33:00.929 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:01.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.190 --rc genhtml_branch_coverage=1 00:33:01.190 --rc genhtml_function_coverage=1 00:33:01.190 --rc genhtml_legend=1 00:33:01.190 --rc geninfo_all_blocks=1 00:33:01.190 --rc geninfo_unexecuted_blocks=1 00:33:01.190 00:33:01.190 ' 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:01.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.190 --rc genhtml_branch_coverage=1 00:33:01.190 --rc genhtml_function_coverage=1 00:33:01.190 --rc genhtml_legend=1 00:33:01.190 --rc geninfo_all_blocks=1 00:33:01.190 --rc geninfo_unexecuted_blocks=1 00:33:01.190 00:33:01.190 ' 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:01.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.190 --rc genhtml_branch_coverage=1 00:33:01.190 --rc genhtml_function_coverage=1 00:33:01.190 --rc genhtml_legend=1 00:33:01.190 --rc geninfo_all_blocks=1 00:33:01.190 --rc geninfo_unexecuted_blocks=1 00:33:01.190 00:33:01.190 ' 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:01.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.190 --rc genhtml_branch_coverage=1 00:33:01.190 --rc genhtml_function_coverage=1 00:33:01.190 --rc genhtml_legend=1 00:33:01.190 --rc geninfo_all_blocks=1 00:33:01.190 --rc geninfo_unexecuted_blocks=1 00:33:01.190 00:33:01.190 ' 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.190 
06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.190 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:01.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:01.191 06:44:20 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:33:01.191 06:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.329 
06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:09.329 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:09.329 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:09.329 Found net devices under 0000:31:00.0: cvl_0_0 
00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:09.329 Found net devices under 0000:31:00.1: cvl_0_1 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0
00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:09.329 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:09.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:09.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms
00:33:09.330
00:33:09.330 --- 10.0.0.2 ping statistics ---
00:33:09.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:09.330 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:09.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:09.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms
00:33:09.330
00:33:09.330 --- 10.0.0.1 ping statistics ---
00:33:09.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:09.330 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:33:09.330 ************************************
00:33:09.330 START TEST nvmf_digest_clean
00:33:09.330 ************************************
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest
00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
host/digest.sh@120 -- # local dsa_initiator 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2886935 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2886935 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2886935 ']' 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:09.330 06:44:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:09.330 [2024-11-20 06:44:28.585503] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:33:09.330 [2024-11-20 06:44:28.585564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.330 [2024-11-20 06:44:28.687306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.330 [2024-11-20 06:44:28.737617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.330 [2024-11-20 06:44:28.737668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.330 [2024-11-20 06:44:28.737677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.330 [2024-11-20 06:44:28.737684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.330 [2024-11-20 06:44:28.737690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:09.330 [2024-11-20 06:44:28.738463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.591 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:09.591 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:33:09.591 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:09.591 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:09.591 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:09.591 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:09.591 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:09.591 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:09.591 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:09.591 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.591 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:09.852 null0 00:33:09.852 [2024-11-20 06:44:29.554601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.852 [2024-11-20 06:44:29.578924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2887175 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2887175 /var/tmp/bperf.sock 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2887175 ']' 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:09.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:09.852 06:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:09.852 [2024-11-20 06:44:29.639206] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:33:09.852 [2024-11-20 06:44:29.639268] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887175 ] 00:33:09.852 [2024-11-20 06:44:29.731089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.113 [2024-11-20 06:44:29.782857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.684 06:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:10.684 06:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:33:10.684 06:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:10.684 06:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:10.684 06:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:10.946 06:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:10.946 06:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:11.208 nvme0n1 00:33:11.208 06:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:11.208 06:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:11.470 Running I/O for 2 seconds... 
00:33:13.354 18983.00 IOPS, 74.15 MiB/s [2024-11-20T05:44:33.274Z] 19647.00 IOPS, 76.75 MiB/s
00:33:13.354 Latency(us)
00:33:13.354 [2024-11-20T05:44:33.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:13.354 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:13.354 nvme0n1 : 2.00 19676.41 76.86 0.00 0.00 6498.64 3044.69 17257.81
00:33:13.354 [2024-11-20T05:44:33.274Z] ===================================================================================================================
00:33:13.354 [2024-11-20T05:44:33.274Z] Total : 19676.41 76.86 0.00 0.00 6498.64 3044.69 17257.81
00:33:13.354 {
00:33:13.354 "results": [
00:33:13.354 {
00:33:13.354 "job": "nvme0n1",
00:33:13.354 "core_mask": "0x2",
00:33:13.354 "workload": "randread",
00:33:13.354 "status": "finished",
00:33:13.354 "queue_depth": 128,
00:33:13.354 "io_size": 4096,
00:33:13.354 "runtime": 2.003516,
00:33:13.354 "iops": 19676.40887320091,
00:33:13.354 "mibps": 76.86097216094106,
00:33:13.354 "io_failed": 0,
00:33:13.354 "io_timeout": 0,
00:33:13.354 "avg_latency_us": 6498.638077553988,
00:33:13.354 "min_latency_us": 3044.693333333333,
00:33:13.354 "max_latency_us": 17257.81333333333
00:33:13.354 }
00:33:13.354 ],
00:33:13.354 "core_count": 1
00:33:13.354 }
00:33:13.354 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:13.354 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:13.354 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:13.354 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:13.354 | select(.opcode=="crc32c")
00:33:13.354 | "\(.module_name) \(.executed)"'
00:33:13.354 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2887175
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2887175 ']'
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2887175
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2887175
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2887175'
00:33:13.614 killing process with pid 2887175
00:33:13.614 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2887175
00:33:13.615 Received shutdown signal, test time was about 2.000000 seconds
00:33:13.615
00:33:13.615 Latency(us)
00:33:13.615 [2024-11-20T05:44:33.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:13.615 [2024-11-20T05:44:33.535Z] ===================================================================================================================
00:33:13.615 [2024-11-20T05:44:33.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:13.615 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2887175
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2887966
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2887966 /var/tmp/bperf.sock
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2887966 ']'
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:13.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:13.876 06:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:33:13.876 [2024-11-20 06:44:33.636878] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:33:13.876 [2024-11-20 06:44:33.636934] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887966 ] 00:33:13.876 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:13.876 Zero copy mechanism will not be used. 00:33:13.876 [2024-11-20 06:44:33.719051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.876 [2024-11-20 06:44:33.748265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.816 06:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:14.816 06:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:33:14.816 06:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:14.816 06:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:14.816 06:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:14.817 06:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:14.817 06:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:15.077 nvme0n1 00:33:15.077 06:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:15.077 06:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:15.077 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:15.077 Zero copy mechanism will not be used. 00:33:15.077 Running I/O for 2 seconds... 
00:33:17.402 3003.00 IOPS, 375.38 MiB/s [2024-11-20T05:44:37.322Z] 2935.50 IOPS, 366.94 MiB/s
00:33:17.402 Latency(us)
00:33:17.402 [2024-11-20T05:44:37.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:17.402 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:17.402 nvme0n1 : 2.00 2938.21 367.28 0.00 0.00 5442.58 1078.61 10649.60
00:33:17.402 [2024-11-20T05:44:37.322Z] ===================================================================================================================
00:33:17.402 [2024-11-20T05:44:37.322Z] Total : 2938.21 367.28 0.00 0.00 5442.58 1078.61 10649.60
00:33:17.402 {
00:33:17.402 "results": [
00:33:17.402 {
00:33:17.402 "job": "nvme0n1",
00:33:17.402 "core_mask": "0x2",
00:33:17.402 "workload": "randread",
00:33:17.402 "status": "finished",
00:33:17.402 "queue_depth": 16,
00:33:17.402 "io_size": 131072,
00:33:17.402 "runtime": 2.003604,
00:33:17.402 "iops": 2938.205353952178,
00:33:17.402 "mibps": 367.27566924402225,
00:33:17.402 "io_failed": 0,
00:33:17.402 "io_timeout": 0,
00:33:17.402 "avg_latency_us": 5442.580003397316,
00:33:17.402 "min_latency_us": 1078.6133333333332,
00:33:17.402 "max_latency_us": 10649.6
00:33:17.402 }
00:33:17.402 ],
00:33:17.402 "core_count": 1
00:33:17.402 }
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:17.402 | select(.opcode=="crc32c")
00:33:17.402 | "\(.module_name) \(.executed)"'
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2887966
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2887966 ']'
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2887966
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2887966
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2887966'
00:33:17.402 killing process with pid 2887966
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2887966
00:33:17.402 Received shutdown signal, test time was about 2.000000 seconds
00:33:17.402
00:33:17.402 Latency(us)
00:33:17.402 [2024-11-20T05:44:37.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:17.402 [2024-11-20T05:44:37.322Z] ===================================================================================================================
00:33:17.402 [2024-11-20T05:44:37.322Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:17.402 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2887966
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2888652
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2888652 /var/tmp/bperf.sock
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2888652 ']'
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:17.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable
00:33:17.663 06:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:33:17.663 [2024-11-20 06:44:37.402184] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:33:17.663 [2024-11-20 06:44:37.402239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888652 ] 00:33:17.663 [2024-11-20 06:44:37.486857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.663 [2024-11-20 06:44:37.516429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.604 06:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:18.604 06:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:33:18.604 06:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:18.604 06:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:18.604 06:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:18.604 06:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:18.604 06:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.176 nvme0n1 00:33:19.176 06:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:19.176 06:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:19.176 Running I/O for 2 seconds... 
00:33:21.062 30196.00 IOPS, 117.95 MiB/s [2024-11-20T05:44:40.982Z] 30301.50 IOPS, 118.37 MiB/s 00:33:21.062 Latency(us) 00:33:21.062 [2024-11-20T05:44:40.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.062 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:21.062 nvme0n1 : 2.00 30314.86 118.42 0.00 0.00 4217.91 2157.23 14745.60 00:33:21.062 [2024-11-20T05:44:40.982Z] =================================================================================================================== 00:33:21.062 [2024-11-20T05:44:40.982Z] Total : 30314.86 118.42 0.00 0.00 4217.91 2157.23 14745.60 00:33:21.062 { 00:33:21.062 "results": [ 00:33:21.062 { 00:33:21.062 "job": "nvme0n1", 00:33:21.062 "core_mask": "0x2", 00:33:21.062 "workload": "randwrite", 00:33:21.062 "status": "finished", 00:33:21.062 "queue_depth": 128, 00:33:21.062 "io_size": 4096, 00:33:21.062 "runtime": 2.003341, 00:33:21.062 "iops": 30314.859027993738, 00:33:21.062 "mibps": 118.41741807810054, 00:33:21.062 "io_failed": 0, 00:33:21.062 "io_timeout": 0, 00:33:21.062 "avg_latency_us": 4217.9076684614665, 00:33:21.062 "min_latency_us": 2157.2266666666665, 00:33:21.062 "max_latency_us": 14745.6 00:33:21.062 } 00:33:21.062 ], 00:33:21.062 "core_count": 1 00:33:21.062 } 00:33:21.062 06:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:21.062 06:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:21.062 06:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:21.062 06:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:21.062 | select(.opcode=="crc32c") 00:33:21.062 | "\(.module_name) \(.executed)"' 00:33:21.062 06:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2888652 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2888652 ']' 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2888652 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2888652 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2888652' 00:33:21.323 killing process with pid 2888652 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2888652 00:33:21.323 Received shutdown signal, test time was about 2.000000 seconds 00:33:21.323 00:33:21.323 Latency(us) 00:33:21.323 [2024-11-20T05:44:41.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.323 [2024-11-20T05:44:41.243Z] =================================================================================================================== 00:33:21.323 [2024-11-20T05:44:41.243Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:21.323 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2888652 00:33:21.583 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:21.583 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:21.583 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:21.583 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:21.583 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:21.583 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:21.583 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:21.583 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2889339 00:33:21.583 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2889339 /var/tmp/bperf.sock 00:33:21.583 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2889339 ']' 00:33:21.583 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:21.584 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:21.584 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:21.584 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:21.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:21.584 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:21.584 06:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:21.584 [2024-11-20 06:44:41.358136] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
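The killprocess invocation above (kill -0 2888652, then ps --no-headers -o comm=) follows a guard pattern that repeats for every bperf and target pid in this section: probe that the pid is still alive, then check that its comm name is an SPDK reactor rather than a sudo wrapper before signalling. A rough sketch of that check (approximate; the authoritative helper lives in common/autotest_common.sh, and its sudo-wrapper branch is elided here):

    pid=2888652
    kill -0 "$pid" || exit 1                          # pid still alive?
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 for an SPDK app
    if [ "$process_name" != sudo ]; then              # sudo-wrapper branch elided
        echo "killing process with pid $pid"
        kill "$pid"
    fi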
00:33:21.584 [2024-11-20 06:44:41.358192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889339 ] 00:33:21.584 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:21.584 Zero copy mechanism will not be used. 00:33:21.584 [2024-11-20 06:44:41.443287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.584 [2024-11-20 06:44:41.471097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.532 06:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:22.532 06:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:33:22.532 06:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:22.532 06:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:22.532 06:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:22.532 06:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:22.532 06:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:22.793 nvme0n1 00:33:22.793 06:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:22.793 06:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:23.052 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:23.052 Zero copy mechanism will not be used. 00:33:23.052 Running I/O for 2 seconds... 
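The MiB/s columns in these result tables are plain arithmetic, IOPS times I/O size: the 4 KiB randwrite pass above reported 30314.86 IOPS ≈ 118.42 MiB/s, and the 128 KiB numbers below follow the same rule. A quick check with bc:

    echo "30314.86 * 4096 / 1048576"  | bc -l   # ≈ 118.42 MiB/s (4 KiB pass above)
    echo "5303.16 * 131072 / 1048576" | bc -l   # ≈ 662.90 MiB/s (128 KiB pass below)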
00:33:24.936 4471.00 IOPS, 558.88 MiB/s [2024-11-20T05:44:44.856Z] 5301.00 IOPS, 662.62 MiB/s 00:33:24.936 Latency(us) 00:33:24.936 [2024-11-20T05:44:44.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.936 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:24.936 nvme0n1 : 2.00 5303.16 662.90 0.00 0.00 3013.73 1208.32 14527.15 00:33:24.936 [2024-11-20T05:44:44.856Z] =================================================================================================================== 00:33:24.936 [2024-11-20T05:44:44.856Z] Total : 5303.16 662.90 0.00 0.00 3013.73 1208.32 14527.15 00:33:24.936 { 00:33:24.936 "results": [ 00:33:24.936 { 00:33:24.936 "job": "nvme0n1", 00:33:24.936 "core_mask": "0x2", 00:33:24.936 "workload": "randwrite", 00:33:24.936 "status": "finished", 00:33:24.936 "queue_depth": 16, 00:33:24.936 "io_size": 131072, 00:33:24.936 "runtime": 2.002201, 00:33:24.936 "iops": 5303.163868163087, 00:33:24.936 "mibps": 662.8954835203858, 00:33:24.936 "io_failed": 0, 00:33:24.936 "io_timeout": 0, 00:33:24.936 "avg_latency_us": 3013.7299830476545, 00:33:24.936 "min_latency_us": 1208.32, 00:33:24.936 "max_latency_us": 14527.146666666667 00:33:24.936 } 00:33:24.936 ], 00:33:24.936 "core_count": 1 00:33:24.936 } 00:33:24.936 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:24.936 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:24.936 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:24.936 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:24.936 | select(.opcode=="crc32c") 00:33:24.937 | "\(.module_name) \(.executed)"' 00:33:24.937 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:25.198 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:25.198 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:25.198 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:25.198 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:25.198 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2889339 00:33:25.198 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2889339 ']' 00:33:25.198 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2889339 00:33:25.198 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:33:25.198 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:25.198 06:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2889339 00:33:25.198 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:25.198 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:33:25.198 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2889339' 00:33:25.198 killing process with pid 2889339 00:33:25.198 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2889339 00:33:25.198 Received shutdown signal, test time was about 2.000000 seconds 00:33:25.198 00:33:25.198 Latency(us) 00:33:25.198 [2024-11-20T05:44:45.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.198 [2024-11-20T05:44:45.118Z] =================================================================================================================== 00:33:25.198 [2024-11-20T05:44:45.118Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:25.198 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2889339 00:33:25.198 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2886935 00:33:25.198 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2886935 ']' 00:33:25.198 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2886935 00:33:25.460 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:33:25.460 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:25.460 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2886935 00:33:25.460 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:25.460 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:25.460 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2886935' 00:33:25.460 killing process with pid 2886935 00:33:25.460 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2886935 00:33:25.460 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2886935 00:33:25.460 00:33:25.461 real 0m16.767s 00:33:25.461 user 0m33.059s 00:33:25.461 sys 0m3.820s 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:25.461 ************************************ 00:33:25.461 END TEST nvmf_digest_clean 00:33:25.461 ************************************ 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:25.461 ************************************ 00:33:25.461 START TEST nvmf_digest_error 00:33:25.461 ************************************ 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2890149 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2890149 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2890149 ']' 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.461 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:25.722 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.722 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:25.722 06:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:25.722 [2024-11-20 06:44:45.428644] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:33:25.722 [2024-11-20 06:44:45.428700] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.722 [2024-11-20 06:44:45.523753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.722 [2024-11-20 06:44:45.561873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.722 [2024-11-20 06:44:45.561904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:25.722 [2024-11-20 06:44:45.561911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.722 [2024-11-20 06:44:45.561916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.722 [2024-11-20 06:44:45.561920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
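Because nvmf_tgt is started here with -e 0xFFFF, every tracepoint group is live, and the NOTICE lines spell out how to inspect them. Taking those messages literally (instance id 0 matches the -i 0 the target was launched with; the copy destination below is an arbitrary choice):

    # snapshot trace events from the running target, per the NOTICEs above
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0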
00:33:25.722 [2024-11-20 06:44:45.562542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.662 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:26.662 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:33:26.662 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:26.662 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:26.662 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.662 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.663 [2024-11-20 06:44:46.268518] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.663 null0 00:33:26.663 [2024-11-20 06:44:46.347779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.663 [2024-11-20 06:44:46.371980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2890400 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2890400 /var/tmp/bperf.sock 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2890400 ']' 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
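The target-side preparation for the error tests is the accel_assign_opc call above: crc32c is routed to the accel "error" module before framework init completes, which the NOTICE at 06:44:46.268 confirms. rpc_cmd here wraps rpc.py against the target's default socket inside the job's network namespace; the equivalent direct invocation is roughly:

    # on the nvmf target (default /var/tmp/spdk.sock, run inside the
    # cvl_0_0_ns_spdk netns as the test does): route crc32c through the
    # error-injection accel module while the app is still pre-init
    $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error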
00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:26.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:26.663 06:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.663 [2024-11-20 06:44:46.439369] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:33:26.663 [2024-11-20 06:44:46.439427] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2890400 ] 00:33:26.663 [2024-11-20 06:44:46.522684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.663 [2024-11-20 06:44:46.552791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.605 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:27.605 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:33:27.605 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:27.605 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:27.605 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:27.605 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.605 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:27.605 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.605 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.606 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.866 nvme0n1 00:33:27.866 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:27.866 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.866 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
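With crc32c owned by the error module, the test toggles injection around controller setup: disable while nvme0 attaches cleanly (above), then switch to corrupt just below so host-side data-digest verification starts failing and produces the *ERROR* lines that dominate the rest of this run. The two RPCs as issued in this log, with the socket choice made explicit as an assumption:

    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"   # target socket (sketch)
    # no injection while the controller connects and the namespace is probed
    $RPC accel_error_inject_error -o crc32c -t disable
    # then corrupt crc32c results; -i 256 is passed exactly as the script does
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256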
00:33:27.866 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.866 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:27.866 06:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:27.866 Running I/O for 2 seconds... 00:33:27.866 [2024-11-20 06:44:47.735258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:27.866 [2024-11-20 06:44:47.735292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.866 [2024-11-20 06:44:47.735302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.866 [2024-11-20 06:44:47.745407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:27.866 [2024-11-20 06:44:47.745428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.866 [2024-11-20 06:44:47.745436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.866 [2024-11-20 06:44:47.754532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:27.866 [2024-11-20 06:44:47.754550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.866 [2024-11-20 06:44:47.754556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.866 [2024-11-20 06:44:47.763597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:27.866 [2024-11-20 06:44:47.763615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.866 [2024-11-20 06:44:47.763622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.866 [2024-11-20 06:44:47.772677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:27.866 [2024-11-20 06:44:47.772695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.866 [2024-11-20 06:44:47.772702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:27.866 [2024-11-20 06:44:47.781733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:27.866 [2024-11-20 06:44:47.781755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:27.866 [2024-11-20 06:44:47.781762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.790103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.790120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.790127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.798771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.798788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.798795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.808937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.808954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.808960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.817950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.817967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.817974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.827110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.827127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.827133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.835244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.835261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.835271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.844367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.844384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.844391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.852995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.853012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.853019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.861645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.861662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.861668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.871113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.871131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.871137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.881524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.881542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.881548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.890796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.890814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.890820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.899831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.899848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.899854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.908020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.908037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.908043] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.918093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.127 [2024-11-20 06:44:47.918113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.127 [2024-11-20 06:44:47.918119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.127 [2024-11-20 06:44:47.927068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:47.927085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:47.927091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:47.936696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:47.936712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:47.936719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:47.945289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:47.945306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:47.945312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:47.954017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:47.954034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:47.954040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:47.962616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:47.962633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:47.962639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:47.971150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:47.971167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:250 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:47.971173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:47.980474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:47.980491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:47.980497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:47.988764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:47.988781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:47.988787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:47.998885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:47.998902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:47.998908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:48.007587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:48.007605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:48.007611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:48.016508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:48.016524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:48.016531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:48.024877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:48.024893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:48.024900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.128 [2024-11-20 06:44:48.033679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.128 [2024-11-20 06:44:48.033696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:17393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.128 [2024-11-20 06:44:48.033702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.389 [2024-11-20 06:44:48.042892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.389 [2024-11-20 06:44:48.042910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.389 [2024-11-20 06:44:48.042917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.389 [2024-11-20 06:44:48.052064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.389 [2024-11-20 06:44:48.052081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.389 [2024-11-20 06:44:48.052087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.389 [2024-11-20 06:44:48.060402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.060419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.060425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.069643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.069660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.069670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.078720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.078737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.078743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.087341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.087357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.087364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.095877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.095894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.095900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.104860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.104876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.104882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.113314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.113332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.113338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.122848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.122865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.122872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.131827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.131844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.131850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.140701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.140718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.140724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.150290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.150307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.150313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.159016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 
00:33:28.390 [2024-11-20 06:44:48.159033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.159040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.166981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.166998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.167004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.176889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.176905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.176912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.188011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.188028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.188034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.195270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.195287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.195293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.206043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.206059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.206065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.214965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.214982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.214989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.223907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.223924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.223933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.232629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.232646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.232652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.241757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.241774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.241780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.250710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.250727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.250733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.259380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.259397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.259404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.268688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.268705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.268711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.278232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.278249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.278255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.286368] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.286385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.390 [2024-11-20 06:44:48.286392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.390 [2024-11-20 06:44:48.295354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.390 [2024-11-20 06:44:48.295371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.391 [2024-11-20 06:44:48.295377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.391 [2024-11-20 06:44:48.304689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.391 [2024-11-20 06:44:48.304709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.391 [2024-11-20 06:44:48.304716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.652 [2024-11-20 06:44:48.312330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.652 [2024-11-20 06:44:48.312347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.652 [2024-11-20 06:44:48.312353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.652 [2024-11-20 06:44:48.321367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.652 [2024-11-20 06:44:48.321384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.652 [2024-11-20 06:44:48.321390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.652 [2024-11-20 06:44:48.330982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.652 [2024-11-20 06:44:48.330999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.652 [2024-11-20 06:44:48.331005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.652 [2024-11-20 06:44:48.338754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:28.652 [2024-11-20 06:44:48.338770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.652 [2024-11-20 06:44:48.338777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 
00:33:28.914 27669.00 IOPS, 108.08 MiB/s [2024-11-20T05:44:48.834Z] [2024-11-20 06:44:48.722012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0)
00:33:28.914 [2024-11-20 06:44:48.722029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:28.914 [2024-11-20 06:44:48.722036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.490032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.490052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.490058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.498553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.498570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.498577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.507752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.507769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.507775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.515919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.515944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.515951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.524380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.524397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.524404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.533579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.533596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.533602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.544823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.544841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.544847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.553959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.553976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.553982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.563041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.563058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.563064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.572147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.572164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.572170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.581180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.581197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.581203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.589836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.589852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.589858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.598681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.598697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.598704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.607138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.607155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:29.715 [2024-11-20 06:44:49.607161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.616264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.616281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.616286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.715 [2024-11-20 06:44:49.624696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.715 [2024-11-20 06:44:49.624712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.715 [2024-11-20 06:44:49.624718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.976 [2024-11-20 06:44:49.634468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.976 [2024-11-20 06:44:49.634486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.976 [2024-11-20 06:44:49.634492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.976 [2024-11-20 06:44:49.642345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.976 [2024-11-20 06:44:49.642362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.976 [2024-11-20 06:44:49.642372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.976 [2024-11-20 06:44:49.651220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.976 [2024-11-20 06:44:49.651237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.976 [2024-11-20 06:44:49.651243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.976 [2024-11-20 06:44:49.660737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.976 [2024-11-20 06:44:49.660758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.976 [2024-11-20 06:44:49.660764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.976 [2024-11-20 06:44:49.672066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.976 [2024-11-20 06:44:49.672083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:9767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.976 [2024-11-20 06:44:49.672089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.976 [2024-11-20 06:44:49.684217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.976 [2024-11-20 06:44:49.684234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.976 [2024-11-20 06:44:49.684241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.976 [2024-11-20 06:44:49.694652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.976 [2024-11-20 06:44:49.694668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.976 [2024-11-20 06:44:49.694674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.976 [2024-11-20 06:44:49.703597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.976 [2024-11-20 06:44:49.703613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.976 [2024-11-20 06:44:49.703619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.976 [2024-11-20 06:44:49.713817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.976 [2024-11-20 06:44:49.713834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.976 [2024-11-20 06:44:49.713840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.976 27763.00 IOPS, 108.45 MiB/s [2024-11-20T05:44:49.896Z] [2024-11-20 06:44:49.722965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ff41c0) 00:33:29.976 [2024-11-20 06:44:49.722981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.976 [2024-11-20 06:44:49.722988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.976 00:33:29.976 Latency(us) 00:33:29.976 [2024-11-20T05:44:49.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.976 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:29.976 nvme0n1 : 2.00 27782.83 108.53 0.00 0.00 4602.62 2293.76 15510.19 00:33:29.976 [2024-11-20T05:44:49.896Z] =================================================================================================================== 00:33:29.976 [2024-11-20T05:44:49.896Z] Total : 27782.83 108.53 0.00 0.00 4602.62 2293.76 15510.19 00:33:29.976 { 00:33:29.976 "results": [ 00:33:29.976 { 00:33:29.976 "job": "nvme0n1", 00:33:29.976 "core_mask": "0x2", 
00:33:29.976 "workload": "randread",
00:33:29.976 "status": "finished",
00:33:29.976 "queue_depth": 128,
00:33:29.976 "io_size": 4096,
00:33:29.976 "runtime": 2.00318,
00:33:29.976 "iops": 27782.82530776066,
00:33:29.976 "mibps": 108.52666135844008,
00:33:29.976 "io_failed": 0,
00:33:29.976 "io_timeout": 0,
00:33:29.976 "avg_latency_us": 4602.618544579006,
00:33:29.976 "min_latency_us": 2293.76,
00:33:29.976 "max_latency_us": 15510.186666666666
00:33:29.976 }
00:33:29.976 ],
00:33:29.976 "core_count": 1
00:33:29.976 }
06:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
06:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
06:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:29.976 | .driver_specific
00:33:29.976 | .nvme_error
00:33:29.976 | .status_code
00:33:29.976 | .command_transient_transport_error'
00:33:29.976 06:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:30.238 06:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
06:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2890400
06:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2890400 ']'
06:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2890400
06:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
06:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
06:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2890400
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2890400'
killing process with pid 2890400
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2890400
Received shutdown signal, test time was about 2.000000 seconds
00:33:30.238
00:33:30.238 Latency(us)
00:33:30.238 [2024-11-20T05:44:50.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:30.238 [2024-11-20T05:44:50.158Z] ===================================================================================================================
00:33:30.238 [2024-11-20T05:44:50.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2890400
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
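For reference, the get_transient_errcount check traced above reduces to the following minimal sketch. The socket path, bdev name, and jq filter are copied verbatim from this log; the RPC_PY shorthand is introduced here only for readability, and it assumes bdevperf was started with -r /var/tmp/bperf.sock and bdev_nvme_set_options --nvme-error-stat, as done earlier in the run.

# Sketch: count completions that finished as transient transport errors.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$RPC_PY" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')
# The run above recorded 218 such errors, so the assertion (( errcount > 0 )) passed.
(( errcount > 0 ))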
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2891081
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2891081 /var/tmp/bperf.sock
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2891081 ']'
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:30.499 [2024-11-20 06:44:50.174810] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:33:30.499 [2024-11-20 06:44:50.174867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891081 ]
00:33:30.499 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:30.499 Zero copy mechanism will not be used.
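Condensed, the second bdevperf instance traced above is launched roughly as below. The arguments are copied from the trace; the backgrounding and the waitforlisten polling loop are simplified here, and the reading of -z (idle until a perform_tests RPC arrives) is inferred from this log, where I/O only starts after bdevperf.py is invoked.

# Sketch: start bdevperf on core mask 0x2 with its own RPC socket.
# -w randread -o 131072 -q 16 -t 2 match run_bperf_err randread 131072 16;
# -z keeps the app waiting until perform_tests is sent over bperf.sock.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# waitforlisten (autotest_common.sh) then polls, up to max_retries=100,
# until /var/tmp/bperf.sock accepts RPCs.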
00:33:30.499 [2024-11-20 06:44:50.260638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:30.499 [2024-11-20 06:44:50.290501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:31.071 06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
06:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:31.331 06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:31.591 nvme0n1
06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
06:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:31.591 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:31.591 Zero copy mechanism will not be used.
00:33:31.591 Running I/O for 2 seconds...
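The setup just traced condenses to the sketch below. The commands are as they appear in the trace; bperf_rpc expands to rpc.py against bdevperf's private socket, while rpc_cmd is the harness helper that uses the default RPC socket (its calls show no -s /var/tmp/bperf.sock above), and the RPC_PY shorthand is introduced here for brevity.

RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Record per-status NVMe error counts and retry failed I/O indefinitely.
"$RPC_PY" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Make sure no accel-layer error injection is active yet.
rpc_cmd accel_error_inject_error -o crc32c -t disable
# Attach the target with data digest (--ddgst) so payloads are CRC32C-checked.
"$RPC_PY" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt 32 crc32c operations; data digest verification then fails and the
# reads complete as COMMAND TRANSIENT TRANSPORT ERROR (00/22), as seen below.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
# Kick off the queued 2-second randread run.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
  -s /var/tmp/bperf.sock perform_tests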
00:33:31.592 [2024-11-20 06:44:51.472908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60)
00:33:31.592 [2024-11-20 06:44:51.472942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:31.592 [2024-11-20 06:44:51.472952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... dozens more identical data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets on tqpair=(0x22caa60), 06:44:51.483975 through 06:44:52.173586, now with len:32 for the 131072-byte reads and differing only in cid and lba ...]
00:33:32.380 [2024-11-20 06:44:52.186782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60)
00:33:32.380 [2024-11-20 06:44:52.186800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:32.380 [2024-11-20 06:44:52.186806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0
dnr:0 00:33:32.380 [2024-11-20 06:44:52.199297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.380 [2024-11-20 06:44:52.199315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.380 [2024-11-20 06:44:52.199321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:32.380 [2024-11-20 06:44:52.211324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.380 [2024-11-20 06:44:52.211343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.380 [2024-11-20 06:44:52.211349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:32.380 [2024-11-20 06:44:52.223008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.380 [2024-11-20 06:44:52.223027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.380 [2024-11-20 06:44:52.223033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:32.380 [2024-11-20 06:44:52.232951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.380 [2024-11-20 06:44:52.232969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.380 [2024-11-20 06:44:52.232976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:32.380 [2024-11-20 06:44:52.242666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.380 [2024-11-20 06:44:52.242684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.380 [2024-11-20 06:44:52.242691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:32.380 [2024-11-20 06:44:52.251226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.380 [2024-11-20 06:44:52.251245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.380 [2024-11-20 06:44:52.251251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:32.380 [2024-11-20 06:44:52.261488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.380 [2024-11-20 06:44:52.261506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.380 [2024-11-20 06:44:52.261513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:32.380 [2024-11-20 06:44:52.273180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.380 [2024-11-20 06:44:52.273199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.380 [2024-11-20 06:44:52.273208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:32.380 [2024-11-20 06:44:52.285676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.380 [2024-11-20 06:44:52.285694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.380 [2024-11-20 06:44:52.285700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:32.641 [2024-11-20 06:44:52.296920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.641 [2024-11-20 06:44:52.296938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.641 [2024-11-20 06:44:52.296945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:32.641 [2024-11-20 06:44:52.308872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.641 [2024-11-20 06:44:52.308889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.641 [2024-11-20 06:44:52.308895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:32.641 [2024-11-20 06:44:52.320456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.641 [2024-11-20 06:44:52.320475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.641 [2024-11-20 06:44:52.320481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:32.641 [2024-11-20 06:44:52.331910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.641 [2024-11-20 06:44:52.331928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.641 [2024-11-20 06:44:52.331935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:32.641 [2024-11-20 06:44:52.344107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.641 [2024-11-20 06:44:52.344125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.641 [2024-11-20 06:44:52.344132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:32.641 [2024-11-20 06:44:52.355643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.641 [2024-11-20 06:44:52.355661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.641 [2024-11-20 06:44:52.355668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:32.641 [2024-11-20 06:44:52.367943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.641 [2024-11-20 06:44:52.367962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 [2024-11-20 06:44:52.367968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:32.642 [2024-11-20 06:44:52.380587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.380610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 [2024-11-20 06:44:52.380616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:32.642 [2024-11-20 06:44:52.391292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.391310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 [2024-11-20 06:44:52.391317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:32.642 [2024-11-20 06:44:52.403213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.403231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 [2024-11-20 06:44:52.403237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:32.642 [2024-11-20 06:44:52.414942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.414960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 [2024-11-20 06:44:52.414966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:32.642 [2024-11-20 06:44:52.427242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.427261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 
[2024-11-20 06:44:52.427267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:32.642 [2024-11-20 06:44:52.438490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.438508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 [2024-11-20 06:44:52.438515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:32.642 [2024-11-20 06:44:52.450124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.450142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 [2024-11-20 06:44:52.450149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:32.642 [2024-11-20 06:44:52.461534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.461552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 [2024-11-20 06:44:52.461558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:32.642 2873.00 IOPS, 359.12 MiB/s [2024-11-20T05:44:52.562Z] [2024-11-20 06:44:52.472798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.472817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 [2024-11-20 06:44:52.472823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:32.642 [2024-11-20 06:44:52.484541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.484559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 [2024-11-20 06:44:52.484566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:32.642 [2024-11-20 06:44:52.495551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.495569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.642 [2024-11-20 06:44:52.495576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:32.642 [2024-11-20 06:44:52.506813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:32.642 [2024-11-20 06:44:52.506832] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
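The signature repeating above is SPDK's NVMe/TCP initiator rejecting read data whose data digest (DDGST) fails verification: nvme_tcp_accel_seq_recv_compute_crc32_done recomputes the digest over each received data PDU, logs the mismatch, and the affected READ completes as COMMAND TRANSIENT TRANSPORT ERROR (SCT 0x0 / SC 0x22, printed as "(00/22)") with dnr:0, meaning the host may retry. The digest itself is a CRC32C (Castagnoli) checksum of the PDU payload. A minimal sketch of that arithmetic, in Python; this is illustrative only, SPDK computes the CRC through its accel framework, and the payload below is an assumed stand-in:

#!/usr/bin/env python3
# Bit-serial CRC32C (Castagnoli), the checksum family behind the NVMe/TCP
# data digest: reflected polynomial 0x82F63B78, init and final XOR 0xFFFFFFFF.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; fold in the polynomial when the dropped bit was 1.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

if __name__ == "__main__":
    # Well-known CRC32C check value.
    assert crc32c(b"123456789") == 0xE3069283
    payload = bytes(16384)  # assumed stand-in for one 16 KiB data PDU payload
    print(f"digest over payload: 0x{crc32c(payload):08x}")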
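Because every occurrence has the same three-line shape (the *ERROR* digest line, the offending READ, and its completion), a throwaway script can reduce a saved console log to a per-command-identifier tally when triaging a run like this one. A sketch that assumes only the log format visible above; the script name and helper are hypothetical, not part of the SPDK tree:

#!/usr/bin/env python3
# summarize_digest_errors.py (hypothetical): tally SPDK NVMe/TCP data digest
# errors and the READ commands that failed, from a saved autotest console log.
import re
import sys
from collections import Counter

ERR = re.compile(r"data digest error on tqpair=\((0x[0-9a-fA-F]+)\)")
CMD = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

def summarize(path: str) -> None:
    digest_errors = 0
    reads_per_cid: Counter[str] = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            digest_errors += len(ERR.findall(line))
            # finditer, not match: several entries may be jammed on one line.
            for m in CMD.finditer(line):
                reads_per_cid[m.group(2)] += 1
    print(f"data digest errors: {digest_errors}")
    for cid, count in reads_per_cid.most_common():
        print(f"  cid {cid}: {count} failed READs")

if __name__ == "__main__":
    summarize(sys.argv[1])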
00:33:33.690 [2024-11-20 06:44:53.363778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60)
00:33:33.690 [2024-11-20 06:44:53.363802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:33.690 [2024-11-20 06:44:53.363809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:33:33.690 [2024-11-20 06:44:53.373530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.373548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.373554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.382559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.382577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.382583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.392155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.392173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.392179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.402761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.402778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.402784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.410253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.410275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.410281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.417474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.417492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.417498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.422762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.422780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.422786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.432927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.432945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.432951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.443581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.443599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.443605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.448705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.448723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.448730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.453357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.453375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.453381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.457788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.457806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.457812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:33.690 [2024-11-20 06:44:53.467368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22caa60) 00:33:33.690 [2024-11-20 06:44:53.467386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.690 [2024-11-20 06:44:53.467392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:33.690 3183.00 IOPS, 397.88 MiB/s 00:33:33.690 Latency(us) 00:33:33.690 [2024-11-20T05:44:53.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.690 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:33.690 nvme0n1 : 2.00 3188.27 398.53 0.00 0.00 5015.65 638.29 12943.36 00:33:33.690 [2024-11-20T05:44:53.610Z] 
=================================================================================================================== 00:33:33.690 [2024-11-20T05:44:53.610Z] Total : 3188.27 398.53 0.00 0.00 5015.65 638.29 12943.36 00:33:33.690 { 00:33:33.690 "results": [ 00:33:33.690 { 00:33:33.690 "job": "nvme0n1", 00:33:33.690 "core_mask": "0x2", 00:33:33.690 "workload": "randread", 00:33:33.690 "status": "finished", 00:33:33.690 "queue_depth": 16, 00:33:33.690 "io_size": 131072, 00:33:33.690 "runtime": 2.001711, 00:33:33.690 "iops": 3188.272432933625, 00:33:33.690 "mibps": 398.53405411670315, 00:33:33.690 "io_failed": 0, 00:33:33.690 "io_timeout": 0, 00:33:33.690 "avg_latency_us": 5015.648645147811, 00:33:33.690 "min_latency_us": 638.2933333333333, 00:33:33.690 "max_latency_us": 12943.36 00:33:33.690 } 00:33:33.690 ], 00:33:33.690 "core_count": 1 00:33:33.690 } 00:33:33.690 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:33.690 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:33.690 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:33.690 | .driver_specific 00:33:33.690 | .nvme_error 00:33:33.690 | .status_code 00:33:33.690 | .command_transient_transport_error' 00:33:33.690 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:33.950 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 206 > 0 )) 00:33:33.950 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2891081 00:33:33.950 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2891081 ']' 00:33:33.950 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2891081 00:33:33.950 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:33:33.950 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:33.950 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2891081 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2891081' 00:33:33.951 killing process with pid 2891081 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2891081 00:33:33.951 Received shutdown signal, test time was about 2.000000 seconds 00:33:33.951 00:33:33.951 Latency(us) 00:33:33.951 [2024-11-20T05:44:53.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.951 [2024-11-20T05:44:53.871Z] =================================================================================================================== 00:33:33.951 [2024-11-20T05:44:53.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:33.951 06:44:53 
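The assertion above, (( 206 > 0 )), is where the randread digest subtest actually passes or fails: the harness asks the bdevperf instance for its per-bdev I/O statistics and requires that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted, confirming the corrupted data digests were detected on the host side. A minimal sketch of that query, assuming a bdevperf process is already listening on /var/tmp/bperf.sock and was configured with bdev_nvme_set_options --nvme-error-stat (get_transient_errcount is the harness helper's name; the jq path is copied from the trace above):

# Sketch of host/digest.sh's get_transient_errcount, under the assumptions above.
get_transient_errcount() {
    local bdev=$1
    # --nvme-error-stat makes bdev_get_iostat include NVMe status-code
    # counters under driver_specific.nvme_error.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The subtest passes only if error injection produced at least one such error.
(( $(get_transient_errcount nvme0n1) > 0 ))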
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2891081 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2891763 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2891763 /var/tmp/bperf.sock 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2891763 ']' 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:33.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:33.951 06:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:34.211 [2024-11-20 06:44:53.890754] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:33:34.211 [2024-11-20 06:44:53.890810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891763 ] 00:33:34.211 [2024-11-20 06:44:53.974868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.211 [2024-11-20 06:44:54.004576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.782 06:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:34.782 06:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:33:34.782 06:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:34.782 06:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:35.043 06:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:35.043 06:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.043 06:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.043 06:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.043 06:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.043 06:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.303 nvme0n1 00:33:35.303 06:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:35.303 06:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.303 06:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:35.303 06:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.303 06:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:35.303 06:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:35.563 Running I/O for 2 seconds... 
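Before the WRITE-side digest errors below start streaming, the trace has completed the full setup for the randwrite 4096/128 subtest. Condensed into one sketch (paths, addresses, and flags are copied from the trace; note that bperf_rpc explicitly targets bdevperf's socket /var/tmp/bperf.sock, while rpc_cmd appears to talk to the NVMe-oF target app's default RPC socket, which is where the crc32c corruption is injected):

# Sketch of the randwrite digest-error setup, under the assumptions above.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # from the trace

# Start bdevperf idle (-z) on core 1; perform_tests will kick off the I/O.
"$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z &

# Count NVMe errors per status code; retry transient failures indefinitely.
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Start with healthy crc32c offload on the target...
"$SPDK_ROOT/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# ...attach with data digest enabled (--ddgst) so every payload is checksummed...
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then inject crc32c corruption (-t corrupt -i 256, as in the trace)
# and drive I/O for the 2-second run that produces the errors below.
"$SPDK_ROOT/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests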
00:33:35.563 [2024-11-20 06:44:55.259084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee5658 00:33:35.563 [2024-11-20 06:44:55.260182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.563 [2024-11-20 06:44:55.260207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:35.563 [2024-11-20 06:44:55.267710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee49b0 00:33:35.563 [2024-11-20 06:44:55.268796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.563 [2024-11-20 06:44:55.268814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:35.563 [2024-11-20 06:44:55.275015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee2c28 00:33:35.563 [2024-11-20 06:44:55.275751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.563 [2024-11-20 06:44:55.275767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.563 [2024-11-20 06:44:55.283563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef2510 00:33:35.563 [2024-11-20 06:44:55.284267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.563 [2024-11-20 06:44:55.284283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.563 [2024-11-20 06:44:55.291981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eed0b0 00:33:35.563 [2024-11-20 06:44:55.292697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.563 [2024-11-20 06:44:55.292713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:35.563 [2024-11-20 06:44:55.301547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016edf550 00:33:35.563 [2024-11-20 06:44:55.302620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.563 [2024-11-20 06:44:55.302636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.563 [2024-11-20 06:44:55.310242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef31b8 00:33:35.563 [2024-11-20 06:44:55.311052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.311068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.318769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee27f0 00:33:35.564 [2024-11-20 06:44:55.319605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.319622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.327666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee8088 00:33:35.564 [2024-11-20 06:44:55.328733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.328752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.336097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eeaab8 00:33:35.564 [2024-11-20 06:44:55.337127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.337143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.344572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eed0b0 00:33:35.564 [2024-11-20 06:44:55.345631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.345647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.353043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eef270 00:33:35.564 [2024-11-20 06:44:55.354062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.354077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.361503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef31b8 00:33:35.564 [2024-11-20 06:44:55.362566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.362583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.369984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee5ec8 00:33:35.564 [2024-11-20 06:44:55.371019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.371035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.378456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee3d08 00:33:35.564 [2024-11-20 06:44:55.379516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.379532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.386910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef7100 00:33:35.564 [2024-11-20 06:44:55.387974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.387989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.395369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee6738 00:33:35.564 [2024-11-20 06:44:55.396425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.396444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.403832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee88f8 00:33:35.564 [2024-11-20 06:44:55.404879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.404895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.412467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eec840 00:33:35.564 [2024-11-20 06:44:55.413546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.413562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.420947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eeea00 00:33:35.564 [2024-11-20 06:44:55.422009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.422025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.429516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef2948 00:33:35.564 [2024-11-20 06:44:55.430587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.430603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.438008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016edf550 00:33:35.564 [2024-11-20 06:44:55.439028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.439044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.446471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee4578 00:33:35.564 [2024-11-20 06:44:55.447543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.447558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.454961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee0ea0 00:33:35.564 [2024-11-20 06:44:55.455986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.456001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.463435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016edfdc0 00:33:35.564 [2024-11-20 06:44:55.464483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.464499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.564 [2024-11-20 06:44:55.471925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee8088 00:33:35.564 [2024-11-20 06:44:55.472962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.564 [2024-11-20 06:44:55.472978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.825 [2024-11-20 06:44:55.480398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eeaab8 00:33:35.825 [2024-11-20 06:44:55.481469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.825 [2024-11-20 06:44:55.481485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.825 [2024-11-20 06:44:55.488903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eed0b0 00:33:35.825 [2024-11-20 06:44:55.489949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.825 [2024-11-20 06:44:55.489965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.825 [2024-11-20 06:44:55.497370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eef270 00:33:35.825 [2024-11-20 06:44:55.498419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.825 [2024-11-20 06:44:55.498435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.825 [2024-11-20 06:44:55.506940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef31b8 00:33:35.825 [2024-11-20 06:44:55.508469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.825 [2024-11-20 06:44:55.508485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:35.825 [2024-11-20 06:44:55.512960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee3d08 00:33:35.825 [2024-11-20 06:44:55.513679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.825 [2024-11-20 06:44:55.513695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:35.825 [2024-11-20 06:44:55.521703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efcdd0 00:33:35.825 [2024-11-20 06:44:55.522336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.825 [2024-11-20 06:44:55.522351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.825 [2024-11-20 06:44:55.530193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef92c0 00:33:35.825 [2024-11-20 06:44:55.530710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.825 [2024-11-20 06:44:55.530726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.538676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee23b8 00:33:35.826 [2024-11-20 06:44:55.539328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.539344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.547144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efcdd0 00:33:35.826 [2024-11-20 06:44:55.547812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.547828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.555618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef92c0 00:33:35.826 [2024-11-20 06:44:55.556297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.556314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.564097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee23b8 00:33:35.826 [2024-11-20 06:44:55.564763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.564778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.572559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efcdd0 00:33:35.826 [2024-11-20 06:44:55.573231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.573247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.581030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef92c0 00:33:35.826 [2024-11-20 06:44:55.581698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.581714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.589481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee23b8 00:33:35.826 [2024-11-20 06:44:55.590152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.590168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.597921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efcdd0 00:33:35.826 [2024-11-20 06:44:55.598585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.598601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.606424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef92c0 00:33:35.826 [2024-11-20 06:44:55.607061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.607077] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.614921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee23b8 00:33:35.826 [2024-11-20 06:44:55.615570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.615588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.623804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eef270 00:33:35.826 [2024-11-20 06:44:55.624498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.624514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.632458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee95a0 00:33:35.826 [2024-11-20 06:44:55.633370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.633385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.640932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eea680 00:33:35.826 [2024-11-20 06:44:55.641826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.641842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.649402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef8e88 00:33:35.826 [2024-11-20 06:44:55.650337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.650353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.657875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef0ff8 00:33:35.826 [2024-11-20 06:44:55.658750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.658766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.666325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eebfd0 00:33:35.826 [2024-11-20 06:44:55.667248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 
[2024-11-20 06:44:55.667264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.674785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef4f40 00:33:35.826 [2024-11-20 06:44:55.675708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.675724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.683234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee6fa8 00:33:35.826 [2024-11-20 06:44:55.684166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.684182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.691684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee6300 00:33:35.826 [2024-11-20 06:44:55.692603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.692619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.700173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef3e60 00:33:35.826 [2024-11-20 06:44:55.701094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.701110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.708645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef2d80 00:33:35.826 [2024-11-20 06:44:55.709556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.709573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.717103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee12d8 00:33:35.826 [2024-11-20 06:44:55.718020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.718036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.725552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eeee38 00:33:35.826 [2024-11-20 06:44:55.726486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:35.826 [2024-11-20 06:44:55.726502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:35.826 [2024-11-20 06:44:55.733996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eedd58 00:33:35.826 [2024-11-20 06:44:55.734910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:35.826 [2024-11-20 06:44:55.734925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.087 [2024-11-20 06:44:55.742500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eecc78 00:33:36.087 [2024-11-20 06:44:55.743437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.087 [2024-11-20 06:44:55.743454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.087 [2024-11-20 06:44:55.750973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc128 00:33:36.087 [2024-11-20 06:44:55.751877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.087 [2024-11-20 06:44:55.751893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.087 [2024-11-20 06:44:55.759429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee27f0 00:33:36.087 [2024-11-20 06:44:55.760342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.760358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.767905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee1710 00:33:36.088 [2024-11-20 06:44:55.768817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.768833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.776445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eea248 00:33:36.088 [2024-11-20 06:44:55.777367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.777383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.784906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef8a50 00:33:36.088 [2024-11-20 06:44:55.785822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16280 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:36.088 [2024-11-20 06:44:55.785838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.793385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef1430 00:33:36.088 [2024-11-20 06:44:55.794311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.794327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.801860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eebb98 00:33:36.088 [2024-11-20 06:44:55.802775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.802790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.810323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef4b08 00:33:36.088 [2024-11-20 06:44:55.811256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.811272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.819895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef5be8 00:33:36.088 [2024-11-20 06:44:55.821233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.821248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.827387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee3060 00:33:36.088 [2024-11-20 06:44:55.828078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.828094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.835879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee4de8 00:33:36.088 [2024-11-20 06:44:55.836564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.836583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.844377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef96f8 00:33:36.088 [2024-11-20 06:44:55.845062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23718 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.845078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.852840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee3060 00:33:36.088 [2024-11-20 06:44:55.853509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.853525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.861312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee4de8 00:33:36.088 [2024-11-20 06:44:55.862020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.862035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.870165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef96f8 00:33:36.088 [2024-11-20 06:44:55.871215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.871230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.878552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efa7d8 00:33:36.088 [2024-11-20 06:44:55.879582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.879598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.887035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efb8b8 00:33:36.088 [2024-11-20 06:44:55.888025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.888041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.895500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016edece0 00:33:36.088 [2024-11-20 06:44:55.896552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.896568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.903968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee0ea0 00:33:36.088 [2024-11-20 06:44:55.905009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7402 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.905024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.912418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef6890 00:33:36.088 [2024-11-20 06:44:55.913459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.913478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.920894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ede470 00:33:36.088 [2024-11-20 06:44:55.921933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.921948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.929386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee99d8 00:33:36.088 [2024-11-20 06:44:55.930426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.930442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.937888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee1f80 00:33:36.088 [2024-11-20 06:44:55.938924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.938939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.946351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee3060 00:33:36.088 [2024-11-20 06:44:55.947389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.947405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.954821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee7c50 00:33:36.088 [2024-11-20 06:44:55.955840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.955856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.963272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee8d30 00:33:36.088 [2024-11-20 06:44:55.964265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 
nsid:1 lba:18910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.964280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.971761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee84c0 00:33:36.088 [2024-11-20 06:44:55.972795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.972810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.980237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efef90 00:33:36.088 [2024-11-20 06:44:55.981303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.088 [2024-11-20 06:44:55.981319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.088 [2024-11-20 06:44:55.988705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efdeb0 00:33:36.089 [2024-11-20 06:44:55.989747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.089 [2024-11-20 06:44:55.989762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.089 [2024-11-20 06:44:55.997195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef7970 00:33:36.089 [2024-11-20 06:44:55.998245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.089 [2024-11-20 06:44:55.998260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.005653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef92c0 00:33:36.350 [2024-11-20 06:44:56.006706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.006722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.014124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efa3a0 00:33:36.350 [2024-11-20 06:44:56.015180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.015195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.022610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efb480 00:33:36.350 [2024-11-20 06:44:56.023657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:101 nsid:1 lba:4041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.023674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.031113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016edf118 00:33:36.350 [2024-11-20 06:44:56.032157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.032173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.039597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef6458 00:33:36.350 [2024-11-20 06:44:56.040645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.040661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.048084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016edf988 00:33:36.350 [2024-11-20 06:44:56.049108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.049124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.056537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee73e0 00:33:36.350 [2024-11-20 06:44:56.057574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.057590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.065024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ede038 00:33:36.350 [2024-11-20 06:44:56.066053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.066070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.073517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee1b48 00:33:36.350 [2024-11-20 06:44:56.074551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.074567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.081988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee2c28 00:33:36.350 [2024-11-20 06:44:56.083028] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.083044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.090459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc560 00:33:36.350 [2024-11-20 06:44:56.091507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.091523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.098933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee88f8 00:33:36.350 [2024-11-20 06:44:56.099981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.099997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.107382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eeb328 00:33:36.350 [2024-11-20 06:44:56.108432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.108448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.115877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016edfdc0 00:33:36.350 [2024-11-20 06:44:56.116900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.116916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.124348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efe720 00:33:36.350 [2024-11-20 06:44:56.125379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.350 [2024-11-20 06:44:56.125395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.350 [2024-11-20 06:44:56.132837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efeb58 00:33:36.350 [2024-11-20 06:44:56.133891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.133909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.141316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef7da8 00:33:36.351 [2024-11-20 06:44:56.142351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.142367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.149757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef96f8 00:33:36.351 [2024-11-20 06:44:56.150764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.150780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.158275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efa7d8 00:33:36.351 [2024-11-20 06:44:56.159315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.159331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.166761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efb8b8 00:33:36.351 [2024-11-20 06:44:56.167785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.167801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.175236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016edece0 00:33:36.351 [2024-11-20 06:44:56.176290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.176305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.183714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee0ea0 00:33:36.351 [2024-11-20 06:44:56.184751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.184766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.192169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef6890 00:33:36.351 [2024-11-20 06:44:56.193221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.193237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.200624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ede470 00:33:36.351 [2024-11-20 06:44:56.201670] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.201685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.209128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee99d8 00:33:36.351 [2024-11-20 06:44:56.210179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.210194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.217602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee1f80 00:33:36.351 [2024-11-20 06:44:56.218646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.218663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.226113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee3060 00:33:36.351 [2024-11-20 06:44:56.227160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.227175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.234578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee7c50 00:33:36.351 [2024-11-20 06:44:56.235635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.235651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.243045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee8d30 00:33:36.351 [2024-11-20 06:44:56.244077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.244093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 29861.00 IOPS, 116.64 MiB/s [2024-11-20T05:44:56.271Z] [2024-11-20 06:44:56.251503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc998 00:33:36.351 [2024-11-20 06:44:56.252531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.252546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.351 [2024-11-20 06:44:56.259986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6b0750) with pdu=0x200016efda78 00:33:36.351 [2024-11-20 06:44:56.261015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.351 [2024-11-20 06:44:56.261031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.268454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef6020 00:33:36.612 [2024-11-20 06:44:56.269477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.269492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.276934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efb048 00:33:36.612 [2024-11-20 06:44:56.277931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.277947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.285395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee0630 00:33:36.612 [2024-11-20 06:44:56.286427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.286443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.293859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efcdd0 00:33:36.612 [2024-11-20 06:44:56.294877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.294893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.302337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee1710 00:33:36.612 [2024-11-20 06:44:56.303371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.303387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.310804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc128 00:33:36.612 [2024-11-20 06:44:56.311825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.311841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.319261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x6b0750) with pdu=0x200016eedd58 00:33:36.612 [2024-11-20 06:44:56.320302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.320318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.327727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efe2e8 00:33:36.612 [2024-11-20 06:44:56.328765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.328780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.336193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efd208 00:33:36.612 [2024-11-20 06:44:56.337229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.337245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.344703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef9f68 00:33:36.612 [2024-11-20 06:44:56.345737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.345756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.353238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef4298 00:33:36.612 [2024-11-20 06:44:56.354218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.354236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.361706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef7100 00:33:36.612 [2024-11-20 06:44:56.362716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.362731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.370202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eddc00 00:33:36.612 [2024-11-20 06:44:56.371236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.371252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.612 [2024-11-20 06:44:56.378643] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee27f0 00:33:36.612 [2024-11-20 06:44:56.379640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.612 [2024-11-20 06:44:56.379656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.387128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eecc78 00:33:36.613 [2024-11-20 06:44:56.388165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.388181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.395619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc998 00:33:36.613 [2024-11-20 06:44:56.396640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.396656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.404099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efda78 00:33:36.613 [2024-11-20 06:44:56.405186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.405202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.412741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef6020 00:33:36.613 [2024-11-20 06:44:56.413755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.413771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.421213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efb048 00:33:36.613 [2024-11-20 06:44:56.422244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.422260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.429661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee0630 00:33:36.613 [2024-11-20 06:44:56.430705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.430721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.438257] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efcdd0 00:33:36.613 [2024-11-20 06:44:56.439295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.439311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.446739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee1710 00:33:36.613 [2024-11-20 06:44:56.447755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.447772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.455218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc128 00:33:36.613 [2024-11-20 06:44:56.456234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.456250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.463687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eedd58 00:33:36.613 [2024-11-20 06:44:56.464726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.464741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.472148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efe2e8 00:33:36.613 [2024-11-20 06:44:56.473180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.473196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.480610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efd208 00:33:36.613 [2024-11-20 06:44:56.481637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.481653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.489153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef9f68 00:33:36.613 [2024-11-20 06:44:56.490163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.490178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 
06:44:56.497629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef4298 00:33:36.613 [2024-11-20 06:44:56.498643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.498658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.506135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef7100 00:33:36.613 [2024-11-20 06:44:56.507133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.507149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.514585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eddc00 00:33:36.613 [2024-11-20 06:44:56.515621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.515637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.613 [2024-11-20 06:44:56.523048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee27f0 00:33:36.613 [2024-11-20 06:44:56.524065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.613 [2024-11-20 06:44:56.524080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.875 [2024-11-20 06:44:56.531530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eecc78 00:33:36.875 [2024-11-20 06:44:56.532558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.875 [2024-11-20 06:44:56.532574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.875 [2024-11-20 06:44:56.540020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc998 00:33:36.875 [2024-11-20 06:44:56.541014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.875 [2024-11-20 06:44:56.541030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.875 [2024-11-20 06:44:56.548479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efda78 00:33:36.875 [2024-11-20 06:44:56.549512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.875 [2024-11-20 06:44:56.549528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:33:36.875 [2024-11-20 06:44:56.556957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef6020 00:33:36.875 [2024-11-20 06:44:56.557967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.875 [2024-11-20 06:44:56.557983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.875 [2024-11-20 06:44:56.565389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efb048 00:33:36.875 [2024-11-20 06:44:56.566422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.875 [2024-11-20 06:44:56.566437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.875 [2024-11-20 06:44:56.573856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee0630 00:33:36.875 [2024-11-20 06:44:56.574866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.875 [2024-11-20 06:44:56.574885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.875 [2024-11-20 06:44:56.582328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efcdd0 00:33:36.875 [2024-11-20 06:44:56.583315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.875 [2024-11-20 06:44:56.583330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.875 [2024-11-20 06:44:56.590798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee1710 00:33:36.875 [2024-11-20 06:44:56.591810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.875 [2024-11-20 06:44:56.591827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.875 [2024-11-20 06:44:56.599251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc128 00:33:36.875 [2024-11-20 06:44:56.600266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.875 [2024-11-20 06:44:56.600281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.875 [2024-11-20 06:44:56.607698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eedd58 00:33:36.875 [2024-11-20 06:44:56.608675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.875 [2024-11-20 06:44:56.608692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005d 
p:0 m:0 dnr:0 00:33:36.875 [2024-11-20 06:44:56.616137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efe2e8 00:33:36.875 [2024-11-20 06:44:56.617159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.875 [2024-11-20 06:44:56.617175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.875 [2024-11-20 06:44:56.624621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efd208 00:33:36.876 [2024-11-20 06:44:56.625646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.625661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.633080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef9f68 00:33:36.876 [2024-11-20 06:44:56.634113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.634128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.641549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef4298 00:33:36.876 [2024-11-20 06:44:56.642562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.642578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.650007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef7100 00:33:36.876 [2024-11-20 06:44:56.651038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.651054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.658460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eddc00 00:33:36.876 [2024-11-20 06:44:56.659493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.659509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.666944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee27f0 00:33:36.876 [2024-11-20 06:44:56.667976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.667992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 
cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.675413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eecc78 00:33:36.876 [2024-11-20 06:44:56.676446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.676461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.683874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc998 00:33:36.876 [2024-11-20 06:44:56.684905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.684921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.692327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efda78 00:33:36.876 [2024-11-20 06:44:56.693301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.693317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.700768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef6020 00:33:36.876 [2024-11-20 06:44:56.701777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.701793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.709210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efb048 00:33:36.876 [2024-11-20 06:44:56.710224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.710240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.717685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee0630 00:33:36.876 [2024-11-20 06:44:56.718715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.718731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.726143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efcdd0 00:33:36.876 [2024-11-20 06:44:56.727142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.727157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.734634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee1710 00:33:36.876 [2024-11-20 06:44:56.735660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.735677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.743112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc128 00:33:36.876 [2024-11-20 06:44:56.744152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.744168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.751559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eedd58 00:33:36.876 [2024-11-20 06:44:56.752576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.752591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.760055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efe2e8 00:33:36.876 [2024-11-20 06:44:56.761091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.761107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.768526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efd208 00:33:36.876 [2024-11-20 06:44:56.769565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.769580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.777033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef9f68 00:33:36.876 [2024-11-20 06:44:56.778040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.778056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:36.876 [2024-11-20 06:44:56.785483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef4298 00:33:36.876 [2024-11-20 06:44:56.786513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:36.876 [2024-11-20 06:44:56.786529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.138 [2024-11-20 06:44:56.793939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef7100 00:33:37.138 [2024-11-20 06:44:56.794958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.138 [2024-11-20 06:44:56.794974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.138 [2024-11-20 06:44:56.802406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eddc00 00:33:37.138 [2024-11-20 06:44:56.803435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.138 [2024-11-20 06:44:56.803450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.138 [2024-11-20 06:44:56.810886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee27f0 00:33:37.138 [2024-11-20 06:44:56.811886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.138 [2024-11-20 06:44:56.811901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.138 [2024-11-20 06:44:56.819328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eecc78 00:33:37.138 [2024-11-20 06:44:56.820349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.138 [2024-11-20 06:44:56.820365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.138 [2024-11-20 06:44:56.827814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc998 00:33:37.138 [2024-11-20 06:44:56.828833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.138 [2024-11-20 06:44:56.828848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.138 [2024-11-20 06:44:56.836289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efda78 00:33:37.138 [2024-11-20 06:44:56.837321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.138 [2024-11-20 06:44:56.837336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.138 [2024-11-20 06:44:56.844738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef6020 00:33:37.138 [2024-11-20 06:44:56.845773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.138 [2024-11-20 06:44:56.845788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.138 [2024-11-20 06:44:56.853205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efb048 00:33:37.138 [2024-11-20 06:44:56.854217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.138 [2024-11-20 06:44:56.854232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.138 [2024-11-20 06:44:56.861665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee0630 00:33:37.138 [2024-11-20 06:44:56.862690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.138 [2024-11-20 06:44:56.862706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.138 [2024-11-20 06:44:56.870139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efcdd0 00:33:37.138 [2024-11-20 06:44:56.871155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.138 [2024-11-20 06:44:56.871176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.878604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee1710 00:33:37.139 [2024-11-20 06:44:56.879614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.879629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.887045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc128 00:33:37.139 [2024-11-20 06:44:56.888071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.888087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.895502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eedd58 00:33:37.139 [2024-11-20 06:44:56.896520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.896536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.903981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efe2e8 00:33:37.139 [2024-11-20 06:44:56.904986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.905002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.912440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efd208 00:33:37.139 [2024-11-20 06:44:56.913468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.913484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.920925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef9f68 00:33:37.139 [2024-11-20 06:44:56.921956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.921971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.929375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef4298 00:33:37.139 [2024-11-20 06:44:56.930389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.930404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.937836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef7100 00:33:37.139 [2024-11-20 06:44:56.938822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.938837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.946310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eddc00 00:33:37.139 [2024-11-20 06:44:56.947332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.947348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.954783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee27f0 00:33:37.139 [2024-11-20 06:44:56.955800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.955817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.963267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eecc78 00:33:37.139 [2024-11-20 06:44:56.964286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.964302] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.971734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc998 00:33:37.139 [2024-11-20 06:44:56.972759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.972774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.980174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efda78 00:33:37.139 [2024-11-20 06:44:56.981207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.981223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.988642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef6020 00:33:37.139 [2024-11-20 06:44:56.989655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.989671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:56.997124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efb048 00:33:37.139 [2024-11-20 06:44:56.998138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:56.998154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:57.005597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee0630 00:33:37.139 [2024-11-20 06:44:57.006631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:57.006646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:57.014068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efcdd0 00:33:37.139 [2024-11-20 06:44:57.015080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:57.015096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:57.022517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee1710 00:33:37.139 [2024-11-20 06:44:57.023548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 
06:44:57.023564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:57.030967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc128 00:33:37.139 [2024-11-20 06:44:57.031959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:57.031974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:57.039437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eedd58 00:33:37.139 [2024-11-20 06:44:57.040457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.139 [2024-11-20 06:44:57.040473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.139 [2024-11-20 06:44:57.047919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efe2e8 00:33:37.140 [2024-11-20 06:44:57.048958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.140 [2024-11-20 06:44:57.048974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.056418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efd208 00:33:37.400 [2024-11-20 06:44:57.057451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.057467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.064901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef9f68 00:33:37.400 [2024-11-20 06:44:57.065941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.065957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.073340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef4298 00:33:37.400 [2024-11-20 06:44:57.074355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.074371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.081810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef7100 00:33:37.400 [2024-11-20 06:44:57.082822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:37.400 [2024-11-20 06:44:57.082838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.090272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eddc00 00:33:37.400 [2024-11-20 06:44:57.091262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.091280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.098729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee27f0 00:33:37.400 [2024-11-20 06:44:57.099743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.099761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.107191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eecc78 00:33:37.400 [2024-11-20 06:44:57.108210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.108225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.115665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc998 00:33:37.400 [2024-11-20 06:44:57.116696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.116712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.124104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efda78 00:33:37.400 [2024-11-20 06:44:57.125125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.125141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.132592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef6020 00:33:37.400 [2024-11-20 06:44:57.133622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.133638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.141065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efb048 00:33:37.400 [2024-11-20 06:44:57.142082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9485 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.142098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.149523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee0630 00:33:37.400 [2024-11-20 06:44:57.150538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.150553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.157987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efcdd0 00:33:37.400 [2024-11-20 06:44:57.159011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.159027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.166431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee1710 00:33:37.400 [2024-11-20 06:44:57.167425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.167441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.174904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efc128 00:33:37.400 [2024-11-20 06:44:57.175936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.175951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.183391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eedd58 00:33:37.400 [2024-11-20 06:44:57.184411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.184426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.191870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efe2e8 00:33:37.400 [2024-11-20 06:44:57.192879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.192895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.200357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016efd208 00:33:37.400 [2024-11-20 06:44:57.201366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22607 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.201382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.208818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef9f68 00:33:37.400 [2024-11-20 06:44:57.209818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.209834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.217269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef4298 00:33:37.400 [2024-11-20 06:44:57.218290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.218306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.225758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ef7100 00:33:37.400 [2024-11-20 06:44:57.226786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.226801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.234255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016eddc00 00:33:37.400 [2024-11-20 06:44:57.235264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.235280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 [2024-11-20 06:44:57.242715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0750) with pdu=0x200016ee27f0 00:33:37.400 [2024-11-20 06:44:57.243730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.400 [2024-11-20 06:44:57.243748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.400 30008.00 IOPS, 117.22 MiB/s 00:33:37.400 Latency(us) 00:33:37.400 [2024-11-20T05:44:57.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.400 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:37.400 nvme0n1 : 2.00 30021.28 117.27 0.00 0.00 4258.50 2184.53 13926.40 00:33:37.400 [2024-11-20T05:44:57.320Z] =================================================================================================================== 00:33:37.400 [2024-11-20T05:44:57.320Z] Total : 30021.28 117.27 0.00 0.00 4258.50 2184.53 13926.40 00:33:37.400 { 00:33:37.400 "results": [ 00:33:37.400 { 00:33:37.400 "job": "nvme0n1", 00:33:37.400 "core_mask": "0x2", 00:33:37.400 "workload": "randwrite", 00:33:37.400 "status": 
"finished", 00:33:37.400 "queue_depth": 128, 00:33:37.400 "io_size": 4096, 00:33:37.400 "runtime": 2.003379, 00:33:37.400 "iops": 30021.279049046636, 00:33:37.400 "mibps": 117.27062128533842, 00:33:37.400 "io_failed": 0, 00:33:37.400 "io_timeout": 0, 00:33:37.400 "avg_latency_us": 4258.496436995655, 00:33:37.400 "min_latency_us": 2184.5333333333333, 00:33:37.400 "max_latency_us": 13926.4 00:33:37.400 } 00:33:37.400 ], 00:33:37.400 "core_count": 1 00:33:37.400 } 00:33:37.400 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:37.400 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:37.400 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:37.400 | .driver_specific 00:33:37.400 | .nvme_error 00:33:37.400 | .status_code 00:33:37.400 | .command_transient_transport_error' 00:33:37.400 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 235 > 0 )) 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2891763 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2891763 ']' 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2891763 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2891763 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2891763' 00:33:37.660 killing process with pid 2891763 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2891763 00:33:37.660 Received shutdown signal, test time was about 2.000000 seconds 00:33:37.660 00:33:37.660 Latency(us) 00:33:37.660 [2024-11-20T05:44:57.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.660 [2024-11-20T05:44:57.580Z] =================================================================================================================== 00:33:37.660 [2024-11-20T05:44:57.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:37.660 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2891763 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # rw=randwrite 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2892472 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2892472 /var/tmp/bperf.sock 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2892472 ']' 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:37.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:37.920 06:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.920 [2024-11-20 06:44:57.690685] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:33:37.921 [2024-11-20 06:44:57.690762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892472 ] 00:33:37.921 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:37.921 Zero copy mechanism will not be used. 
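The trace above shows digest.sh preparing its second error pass: bdevperf is launched on core mask 0x2 with a 128 KiB random-write workload at queue depth 16 (-w randwrite -o 131072 -q 16 -t 2 -z) against a private RPC socket, and the script blocks until that socket is listening before issuing any RPCs. A minimal standalone sketch of the same launch pattern, using the paths and flags shown in the log; the polling loop is an assumption standing in for the autotest framework's waitforlisten helper:
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start bdevperf in wait-for-RPC mode (-z) with the workload from the trace
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -q 16 -t 2 -z &
bperfpid=$!
# Poll until the UNIX-domain RPC socket answers (assumed stand-in for waitforlisten)
until "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done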
00:33:37.921 [2024-11-20 06:44:57.776866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.921 [2024-11-20 06:44:57.805406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.862 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:38.862 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:33:38.862 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:38.862 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:38.862 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:38.862 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.862 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.862 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.862 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.862 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:39.122 nvme0n1 00:33:39.122 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:39.122 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.122 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:39.122 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.122 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:39.122 06:44:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:39.122 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:39.122 Zero copy mechanism will not be used. 00:33:39.122 Running I/O for 2 seconds... 
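With bdevperf listening, the RPC sequence traced above wires up the digest-error scenario: NVMe error counters are enabled on the bdevperf side, crc32c corruption is armed on the target side, and the controller is attached with data digest (--ddgst) so each corrupted digest surfaces as one of the COMMAND TRANSIENT TRANSPORT ERROR completions that follow. A condensed replay of the commands exactly as they appear in the trace, in log order ($SPDK and the bperf socket path as in the sketch above):
bperf_rpc="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
# bdevperf side: keep per-controller NVMe error stats, retry transient errors indefinitely
$bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Target side (rpc_cmd, default RPC socket): clear any previous crc32c injection
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
# Attach with data digest enabled over TCP; nvme0n1 shows up once this returns
$bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Target side: arm crc32c corruption (-t corrupt -i 32, flags as traced) so data
# digests computed on the wire go bad
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
# Run the 2-second workload; afterwards the transient-error count is read back
# with bdev_get_iostat and the jq filter shown earlier in the log
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests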
00:33:39.122 [2024-11-20 06:44:59.020848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.122 [2024-11-20 06:44:59.021031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.122 [2024-11-20 06:44:59.021057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.122 [2024-11-20 06:44:59.029119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.122 [2024-11-20 06:44:59.029232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.122 [2024-11-20 06:44:59.029251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.123 [2024-11-20 06:44:59.034987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.123 [2024-11-20 06:44:59.035051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.123 [2024-11-20 06:44:59.035073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.123 [2024-11-20 06:44:59.038700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.123 [2024-11-20 06:44:59.038764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.123 [2024-11-20 06:44:59.038783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.042322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.042371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.042392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.045998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.046062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.046084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.049628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.049686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.049716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.053193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.053249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.053268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.056757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.056797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.056813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.060267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.060315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.060333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.063910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.063973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.063995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.067457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.067496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.067514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.070855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.070909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.070928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.074385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.074429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.074459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.077861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.077983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.077999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.082443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.082520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.082535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.090211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.090318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.090333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.095148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.095274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.095289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.099844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.099960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.099976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.105178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.105302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.105318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.110002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.110122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.110138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.114186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.114299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.114315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.118379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.118502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.118517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.122039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.122093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.122111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.125673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.125732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.125756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.387 [2024-11-20 06:44:59.129227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.387 [2024-11-20 06:44:59.129287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.387 [2024-11-20 06:44:59.129306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.133064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.133116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.133135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.136520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.136575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.136594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.140000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.140058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.140077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.143451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.143495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.143520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.146764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.146806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.146829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.150250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.150297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.150321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.153716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.153759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.153783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.157015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.157057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.157088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.160204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.160254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 
06:44:59.160272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.163332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.163372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.163391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.166409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.166465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.166480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.169819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.169893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.169911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.173750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.173824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.173839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.178947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.178999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.179028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.182463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.182504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.182522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.185885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.185936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:39.388 [2024-11-20 06:44:59.185956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.189301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.189343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.189364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.192643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.192688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.192710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.196141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.196188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.196212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.199508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.199564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.199584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.204153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.204227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.204242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.208839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.208898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.208917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.212070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.212116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.212139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.215441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.215495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.215513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.218726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.218782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.218800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.222030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.222076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.222097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.225340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.225385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.225407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.228705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.228753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.228769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.232032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.232077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.232096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.235362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.235414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.235434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.238843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.238885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.238907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.242160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.242210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.242231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.245341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.245387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.245410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.248393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.388 [2024-11-20 06:44:59.248431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.388 [2024-11-20 06:44:59.248456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.388 [2024-11-20 06:44:59.251750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.251806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.251825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.254926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.254971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.254989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.257974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.258033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.258051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.261048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.261100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.261123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.264106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.264158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.264178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.267165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.267210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.267228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.270189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.270230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.270253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.273373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.273464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.273480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.279159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.279201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.279220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.282637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.282677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.282695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.286136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.286193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.286212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.289485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.289532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.289550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.292823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.292887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.292904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.296186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.296235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.296254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.389 [2024-11-20 06:44:59.299503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.389 [2024-11-20 06:44:59.299562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.389 [2024-11-20 06:44:59.299588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.652 [2024-11-20 06:44:59.304352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.652 [2024-11-20 06:44:59.304461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.652 [2024-11-20 06:44:59.304477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.652 [2024-11-20 06:44:59.308501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.652 [2024-11-20 06:44:59.308556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.652 [2024-11-20 06:44:59.308581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.652 [2024-11-20 06:44:59.311992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.652 [2024-11-20 06:44:59.312036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.652 [2024-11-20 06:44:59.312056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.652 [2024-11-20 06:44:59.315350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.652 [2024-11-20 06:44:59.315574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.652 [2024-11-20 06:44:59.315590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.652 [2024-11-20 06:44:59.321228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.652 [2024-11-20 06:44:59.321293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.652 [2024-11-20 06:44:59.321315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.652 [2024-11-20 06:44:59.324584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.652 [2024-11-20 06:44:59.324632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.324650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.327947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.327994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.328014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.331394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.331452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.331478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.335880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 
06:44:59.335972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.335988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.340549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.340611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.340634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.343891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.343945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.343964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.347429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.347488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.347508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.350847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.350891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.350909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.354264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.354303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.354327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.357714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.357764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.357780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.361227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 
00:33:39.653 [2024-11-20 06:44:59.361270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.361291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.364866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.364921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.364939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.369123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.369170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.369185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.373152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.373241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.373256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.377969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.378018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.378036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.381689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.381755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.381774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.385266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.385328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.385347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.389035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) 
with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.389090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.389108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.392766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.392811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.392829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.396525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.396583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.396601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.400653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.400708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.400724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.406085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.406188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.406212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.409685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.653 [2024-11-20 06:44:59.409732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.653 [2024-11-20 06:44:59.409754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.653 [2024-11-20 06:44:59.413205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.413261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.413279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.416700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.416760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.416779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.420209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.420262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.420284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.423682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.423737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.423761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.427043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.427090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.427108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.430618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.430673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.430692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.434186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.434236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.434257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.437780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.437832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.437849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.441221] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.441267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.441282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.444682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.444729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.444761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.448090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.448139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.448159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.451479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.451547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.451566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.456754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.456859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.456874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.460495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.460556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.460579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.463853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.463916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.463937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.467260] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.467314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.467333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.470593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.470647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.470675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.473989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.474045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.474070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.477354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.477397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.477416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.480699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.480757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.480780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.484259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.484311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.484332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.487734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.487786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.487806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.654 
[2024-11-20 06:44:59.491113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.491153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.491172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.494433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.494476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.494498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.497758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.654 [2024-11-20 06:44:59.497807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.654 [2024-11-20 06:44:59.497830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.654 [2024-11-20 06:44:59.501085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.501132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.501152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.504398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.504437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.504456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.507777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.507818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.507840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.511227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.511269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.511292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:33:39.655 [2024-11-20 06:44:59.514633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.514678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.514699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.518408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.518448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.518470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.521920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.521971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.521991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.525374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.525415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.525435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.528824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.528884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.528905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.533497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.533559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.533574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.537334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.537375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.537393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.543285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.543468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.543483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.551856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.552034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.552049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.556903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.556953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.556968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.655 [2024-11-20 06:44:59.561095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.655 [2024-11-20 06:44:59.561216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.655 [2024-11-20 06:44:59.561231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.919 [2024-11-20 06:44:59.568091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.919 [2024-11-20 06:44:59.568156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.919 [2024-11-20 06:44:59.568177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.919 [2024-11-20 06:44:59.572155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.919 [2024-11-20 06:44:59.572218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.919 [2024-11-20 06:44:59.572237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.919 [2024-11-20 06:44:59.575891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.919 [2024-11-20 06:44:59.575938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.919 [2024-11-20 06:44:59.575962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.919 [2024-11-20 06:44:59.579686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.919 [2024-11-20 06:44:59.579728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.919 [2024-11-20 06:44:59.579753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.919 [2024-11-20 06:44:59.583339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.919 [2024-11-20 06:44:59.583394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.919 [2024-11-20 06:44:59.583413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.919 [2024-11-20 06:44:59.587046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.919 [2024-11-20 06:44:59.587091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.919 [2024-11-20 06:44:59.587109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.919 [2024-11-20 06:44:59.590770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.919 [2024-11-20 06:44:59.590817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.919 [2024-11-20 06:44:59.590837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.919 [2024-11-20 06:44:59.594288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.919 [2024-11-20 06:44:59.594345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.919 [2024-11-20 06:44:59.594366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.919 [2024-11-20 06:44:59.597800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.919 [2024-11-20 06:44:59.597854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.919 [2024-11-20 06:44:59.597873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.919 [2024-11-20 06:44:59.601337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.919 [2024-11-20 06:44:59.601393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.601411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.604860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.604914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.604933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.608330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.608377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.608397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.611830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.611880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.611902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.615288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.615344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.615362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.618647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.618697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.618716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.621951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.621993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.622012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.625336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.625382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.625400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.628768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.628820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.628842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.632172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.632215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.632234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.635562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.635637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.635653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.639790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.639837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.639861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.643901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.643945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.643964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.649290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.649353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.649368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.653941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.653998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 
06:44:59.654017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.657641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.657689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.657709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.661005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.661046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.661063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.664841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.664907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.664931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.669105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.669170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.669189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.673075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.673140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.673162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.677489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.677545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.920 [2024-11-20 06:44:59.677567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.681550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.681614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
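Each iteration above follows the same three-entry pattern: tcp.c:2233 (data_crc32_calc_done) reports that the CRC32C data digest computed over a received PDU on TCP qpair 0x6b0a90 does not match the DDGST value carried in the PDU, and nvme_qpair.c then prints the affected command (a 32-block WRITE on qid:1) together with its completion. The completion status (00/22) decodes as status code type 0h (generic command status) with status code 22h, Transient Transport Error; dnr:0 means the Do Not Retry bit is clear, so the host is free to resubmit the command. Only the LBA, the microsecond timestamp, and the cycling sqhd value differ between iterations, which is consistent with a test that corrupts the data digest of every write it sends.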
00:33:39.920 [2024-11-20 06:44:59.681631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.920 [2024-11-20 06:44:59.685754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.920 [2024-11-20 06:44:59.685815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.685830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.689717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.689766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.689794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.693956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.694009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.694027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.698142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.698210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.698225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.701984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.702032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.702052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.705581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.705635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.705656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.709415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.709466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.709485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.713290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.713344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.713365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.717179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.717228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.717247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.721167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.721214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.721236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.724951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.725001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.725021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.728891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.728949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.728965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.732839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.732899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.732921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.736483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.736537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.736555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.740430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.740478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.740497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.744077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.744122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.744141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.747506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.747551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.747572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.751058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.751098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.751122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.754517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.754565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.754584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.757844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.757889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.757908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.761156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.761201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.761216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.764884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.764932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.764953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.768217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.768263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.768285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.771525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.771572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.771594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.774705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.921 [2024-11-20 06:44:59.774757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.921 [2024-11-20 06:44:59.774776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.921 [2024-11-20 06:44:59.778178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.778226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.778245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.783457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.783538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.783553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.787893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.787953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.787973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.791034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.791078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.791093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.794514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.794566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.794583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.798006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.798063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.798081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.801508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.801574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.801592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.804997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.805047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.805064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.808384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.808446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.808467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.811779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.811823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.811840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.815075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.815129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.815148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.818466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.818506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.818522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.821765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.821821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.821838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.825134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.825178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.825196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:39.922 [2024-11-20 06:44:59.828724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:39.922 [2024-11-20 06:44:59.828869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:39.922 [2024-11-20 06:44:59.828884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.185 [2024-11-20 06:44:59.834213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.185 [2024-11-20 06:44:59.834267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.185 [2024-11-20 06:44:59.834285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.185 [2024-11-20 06:44:59.837616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.185 [2024-11-20 
06:44:59.837671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.185 [2024-11-20 06:44:59.837694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.185 [2024-11-20 06:44:59.841003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.185 [2024-11-20 06:44:59.841058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.185 [2024-11-20 06:44:59.841076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.185 [2024-11-20 06:44:59.844599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.185 [2024-11-20 06:44:59.844653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.185 [2024-11-20 06:44:59.844678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.185 [2024-11-20 06:44:59.848436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.185 [2024-11-20 06:44:59.848491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.185 [2024-11-20 06:44:59.848506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.185 [2024-11-20 06:44:59.852189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.185 [2024-11-20 06:44:59.852229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.852248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.855777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.855821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.855839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.859499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.859546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.859565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.863516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 
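For reference, the digest being validated here is a CRC32C (Castagnoli) checksum over the PDU DATA field, which is what the NVMe/TCP transport specifies for the optional HDGST and DDGST fields. The standalone sketch below shows the check conceptually; it is not SPDK's implementation (SPDK uses table- and instruction-accelerated CRC32C helpers), and the pdu_data/received_ddgst names are hypothetical:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (reflected polynomial 0x82F63B78), the checksum
 * NVMe/TCP uses for the optional header (HDGST) and data (DDGST)
 * digests. The bit-at-a-time loop is written for clarity, not speed. */
static uint32_t
crc32c(const uint8_t *buf, size_t len)
{
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int k = 0; k < 8; k++) {
                        /* Shift right; XOR in the polynomial when the LSB was set. */
                        crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
                }
        }
        return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical receiver-side check: recompute CRC32C over the PDU DATA
 * field and compare it with the DDGST trailer. A mismatch is what
 * tcp.c:2233 logs as "Data digest error", after which the command is
 * completed with the Transient Transport Error status seen throughout
 * this log. */
static int
ddgst_ok(const uint8_t *pdu_data, size_t data_len, uint32_t received_ddgst)
{
        return crc32c(pdu_data, data_len) == received_ddgst;
}

int
main(void)
{
        /* "123456789" is the standard CRC test vector; CRC32C yields 0xE3069283. */
        const uint8_t vec[] = {'1','2','3','4','5','6','7','8','9'};

        printf("crc32c(\"123456789\") = 0x%08X (expect 0xE3069283)\n",
               crc32c(vec, sizeof(vec)));
        printf("digest match: %d\n", ddgst_ok(vec, sizeof(vec), 0xE3069283u));
        return 0;
}

Compiled with any C99 compiler, the program prints the standard CRC32C check value 0xE3069283 for the vector "123456789", mirroring the comparison whose failure produces the "Data digest error" entries above.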
00:33:40.186 [2024-11-20 06:44:59.863575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.863593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.867494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.867548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.867572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.871596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.871661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.871679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.875798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.875848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.875870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.879553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.879618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.879636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.883352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.883401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.883416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.887018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.887063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.887081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.890751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with 
pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.890798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.890816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.894236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.894291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.894309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.897825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.897872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.897896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.901817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.901866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.901885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.905522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.905584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.905602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.909176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.909225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.909247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.913309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.913363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.913382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.917135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.917191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.917209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.920922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.920970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.920990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.924517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.924566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.924586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.928089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.928137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.928158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.931978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.932026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.932047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.935604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.935661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.935679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.939015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.939067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.939085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.942696] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.942735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.942755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.946455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.946495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.946513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.950383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.186 [2024-11-20 06:44:59.950429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.186 [2024-11-20 06:44:59.950447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.186 [2024-11-20 06:44:59.954196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.954256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.954275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:44:59.958004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.958065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.958083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:44:59.961782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.961832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.961851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:44:59.965348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.965403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.965427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:44:59.968884] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.968934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.968952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:44:59.972372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.972420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.972441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:44:59.975952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.976002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.976023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:44:59.979424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.979477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.979497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:44:59.982962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.983008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.983023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:44:59.986756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.986793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.986809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:44:59.990833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.990908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.990923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.187 
[2024-11-20 06:44:59.994474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.994518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.994540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:44:59.998303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:44:59.998350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:44:59.998372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:45:00.002466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.002514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.002534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:45:00.006713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.006775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.006796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:45:00.010330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.010381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.010401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:45:00.013977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.014028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.014046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:45:00.017593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.017642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.017664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:33:40.187 [2024-11-20 06:45:00.021134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.022352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.022370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.187 8184.00 IOPS, 1023.00 MiB/s [2024-11-20T05:45:00.107Z] [2024-11-20 06:45:00.025215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.025263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.025281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:45:00.030936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.031088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.031105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:45:00.035252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.035428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.035444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:45:00.038433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.038514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.038532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:45:00.041426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.041545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.041560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.187 [2024-11-20 06:45:00.045669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.187 [2024-11-20 06:45:00.045835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.187 [2024-11-20 06:45:00.045855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:40.187 [2024-11-20 06:45:00.048400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8
00:33:40.187 [2024-11-20 06:45:00.048566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.187 [2024-11-20 06:45:00.048581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:40.187 [2024-11-20 06:45:00.051239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8
00:33:40.187 [2024-11-20 06:45:00.051411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.187 [2024-11-20 06:45:00.051427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-record sequence repeats for each subsequently queued WRITE: a tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8", the *NOTICE* for the failing WRITE command (sqid:1, cid:1 or cid:2, nsid:1, len:32, varying lba), and the matching COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0. The run continues uninterrupted from [2024-11-20 06:45:00.054287] through [2024-11-20 06:45:00.655298] (elapsed 00:33:40.188 to 00:33:40.986) and past the end of this excerpt ...]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.658070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.658135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 06:45:00.658150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.661216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.661281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 06:45:00.661297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.664416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.664489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 06:45:00.664508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.667908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.667977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 06:45:00.667999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.671050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.671098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 06:45:00.671119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.674246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.674392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 06:45:00.674407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.677659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.677766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 
06:45:00.677785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.683362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.683468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 06:45:00.683483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.686735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.686810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 06:45:00.686828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.690296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.690341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 06:45:00.690360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.693741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.693792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 06:45:00.693811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.697454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.986 [2024-11-20 06:45:00.697507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.986 [2024-11-20 06:45:00.697527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.986 [2024-11-20 06:45:00.702760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.702849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.702864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.706554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.706604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:40.987 [2024-11-20 06:45:00.706619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.710254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.710310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.710328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.713090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.713149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.713169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.715850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.715928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.715944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.718571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.718635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.718661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.721382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.721464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.721479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.725858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.725948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.725963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.728707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.728789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.728805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.731334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.731411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.731430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.733942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.734013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.734031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.736563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.736640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.736655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.739173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.739230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.739249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.743573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.743692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.743708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.747858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.747913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.747932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.751567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.751628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.751651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.754277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.754345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.754360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.756928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.756995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.757013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.759527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.759605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.759620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.762145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.762215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.762233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.764811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.764879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.764903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.767396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.767471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.767491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.769983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.770076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.770091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.772700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.772806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.772822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.775514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.775566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.775581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.778387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.987 [2024-11-20 06:45:00.778437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.987 [2024-11-20 06:45:00.778452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.987 [2024-11-20 06:45:00.781629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.781688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.781708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.785521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.785597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.785612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.789628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.789696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.789715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.792358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.792433] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.792449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.795101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.795146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.795161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.798011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.798068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.798084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.800856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.800938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.800954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.803473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.803551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.803566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.806223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.806344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.806364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.809240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.809344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.809359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.814506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.814670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.814685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.820219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.820317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.820332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.826555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.826772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.826788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.833119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.833189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.833204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.839294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.839445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.839460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.846688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.846790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.846805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.853652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.853743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.853763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.861379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 
06:45:00.861477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.861492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.867682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.867879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.867894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.874846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.875018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.875033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.882160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.882247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.882265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:40.988 [2024-11-20 06:45:00.889884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:40.988 [2024-11-20 06:45:00.890051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:40.988 [2024-11-20 06:45:00.890066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.312 [2024-11-20 06:45:00.897101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.312 [2024-11-20 06:45:00.897231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.312 [2024-11-20 06:45:00.897247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.312 [2024-11-20 06:45:00.904528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.312 [2024-11-20 06:45:00.904679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.312 [2024-11-20 06:45:00.904695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.312 [2024-11-20 06:45:00.912173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 
00:33:41.312 [2024-11-20 06:45:00.912253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.312 [2024-11-20 06:45:00.912268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.312 [2024-11-20 06:45:00.918926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.312 [2024-11-20 06:45:00.919084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.312 [2024-11-20 06:45:00.919099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.312 [2024-11-20 06:45:00.926395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.312 [2024-11-20 06:45:00.926598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.312 [2024-11-20 06:45:00.926613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.312 [2024-11-20 06:45:00.932219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.312 [2024-11-20 06:45:00.932279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.312 [2024-11-20 06:45:00.932295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.312 [2024-11-20 06:45:00.935581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.312 [2024-11-20 06:45:00.935636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.312 [2024-11-20 06:45:00.935656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.312 [2024-11-20 06:45:00.938309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.312 [2024-11-20 06:45:00.938375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.312 [2024-11-20 06:45:00.938398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.312 [2024-11-20 06:45:00.941022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.312 [2024-11-20 06:45:00.941085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.312 [2024-11-20 06:45:00.941111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.944061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) 
with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.944166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.944181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.948023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.948120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.948135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.955242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.955316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.955332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.959231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.959337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.959353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.962590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.962713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.962729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.965349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.965429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.965445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.967985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.968058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.968080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.970673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.970752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.970769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.973307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.973382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.973397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.975924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.975998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.976016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.978541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.978619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.978636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.981184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.981258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.981273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.984060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.984135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.984150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.986673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.986754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.986771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.989277] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.989355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.989374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.991870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.991950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.991970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.994437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.994511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.994527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:00.997524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:00.997603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:00.997619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:01.000676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:01.000754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:01.000779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:01.003262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:01.003340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:01.003357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:01.005835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:01.005916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:01.005933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:01.008453] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:01.008532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:01.008547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:01.011176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:01.011258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:01.011274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:01.015155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:01.015226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:01.015242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:41.313 [2024-11-20 06:45:01.020538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:01.020614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:01.020630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:41.313 7915.50 IOPS, 989.44 MiB/s [2024-11-20T05:45:01.233Z] [2024-11-20 06:45:01.025665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b0a90) with pdu=0x200016eff3c8 00:33:41.313 [2024-11-20 06:45:01.025761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:41.313 [2024-11-20 06:45:01.025777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:41.313 00:33:41.313 Latency(us) 00:33:41.313 [2024-11-20T05:45:01.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.313 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:41.314 nvme0n1 : 2.00 7911.39 988.92 0.00 0.00 2018.57 1215.15 8192.00 00:33:41.314 [2024-11-20T05:45:01.234Z] =================================================================================================================== 00:33:41.314 [2024-11-20T05:45:01.234Z] Total : 7911.39 988.92 0.00 0.00 2018.57 1215.15 8192.00 00:33:41.314 { 00:33:41.314 "results": [ 00:33:41.314 { 00:33:41.314 "job": "nvme0n1", 00:33:41.314 "core_mask": "0x2", 00:33:41.314 "workload": "randwrite", 00:33:41.314 "status": "finished", 00:33:41.314 "queue_depth": 16, 00:33:41.314 "io_size": 131072, 00:33:41.314 "runtime": 2.00344, 00:33:41.314 "iops": 7911.392405063291, 00:33:41.314 "mibps": 988.9240506329114, 00:33:41.314 "io_failed": 0, 00:33:41.314 
"io_timeout": 0, 00:33:41.314 "avg_latency_us": 2018.5662990536275, 00:33:41.314 "min_latency_us": 1215.1466666666668, 00:33:41.314 "max_latency_us": 8192.0 00:33:41.314 } 00:33:41.314 ], 00:33:41.314 "core_count": 1 00:33:41.314 } 00:33:41.314 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:41.314 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:41.314 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:41.314 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:41.314 | .driver_specific 00:33:41.314 | .nvme_error 00:33:41.314 | .status_code 00:33:41.314 | .command_transient_transport_error' 00:33:41.314 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 512 > 0 )) 00:33:41.314 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2892472 00:33:41.314 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2892472 ']' 00:33:41.314 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2892472 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2892472 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2892472' 00:33:41.628 killing process with pid 2892472 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2892472 00:33:41.628 Received shutdown signal, test time was about 2.000000 seconds 00:33:41.628 00:33:41.628 Latency(us) 00:33:41.628 [2024-11-20T05:45:01.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.628 [2024-11-20T05:45:01.548Z] =================================================================================================================== 00:33:41.628 [2024-11-20T05:45:01.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2892472 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2890149 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2890149 ']' 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2890149 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2890149 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2890149' 00:33:41.628 killing process with pid 2890149 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2890149 00:33:41.628 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2890149 00:33:41.912 00:33:41.912 real 0m16.206s 00:33:41.912 user 0m31.834s 00:33:41.912 sys 0m3.712s 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:41.912 ************************************ 00:33:41.912 END TEST nvmf_digest_error 00:33:41.912 ************************************ 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.912 rmmod nvme_tcp 00:33:41.912 rmmod nvme_fabrics 00:33:41.912 rmmod nvme_keyring 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2890149 ']' 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2890149 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 2890149 ']' 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 2890149 00:33:41.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2890149) - No such process 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 2890149 is not found' 00:33:41.912 Process with pid 2890149 is not found 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.912 06:45:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:44.458 00:33:44.458 real 0m43.099s 00:33:44.458 user 1m7.115s 00:33:44.458 sys 0m13.366s 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:44.458 ************************************ 00:33:44.458 END TEST nvmf_digest 00:33:44.458 ************************************ 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:44.458 ************************************ 00:33:44.458 START TEST nvmf_bdevperf 00:33:44.458 ************************************ 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:44.458 * Looking for test storage... 
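[Note] The nvmf_digest_error test that ends above passes when the data-digest failures it injects come back as transient transport errors on the host bdev; get_transient_errcount reads that counter over the bperf RPC socket, as traced earlier. A minimal sketch of the check, using the socket path and jq filter from the trace (this run counted 512 such errors):

errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))    # host/digest.sh asserts at least one transient error was recorded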
00:33:44.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:33:44.458 06:45:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:44.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.458 --rc genhtml_branch_coverage=1 00:33:44.458 --rc genhtml_function_coverage=1 00:33:44.458 --rc genhtml_legend=1 00:33:44.458 --rc geninfo_all_blocks=1 00:33:44.458 --rc geninfo_unexecuted_blocks=1 00:33:44.458 00:33:44.458 ' 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:44.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.458 --rc genhtml_branch_coverage=1 00:33:44.458 --rc genhtml_function_coverage=1 00:33:44.458 --rc genhtml_legend=1 00:33:44.458 --rc geninfo_all_blocks=1 00:33:44.458 --rc geninfo_unexecuted_blocks=1 00:33:44.458 00:33:44.458 ' 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:44.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.458 --rc genhtml_branch_coverage=1 00:33:44.458 --rc genhtml_function_coverage=1 00:33:44.458 --rc genhtml_legend=1 00:33:44.458 --rc geninfo_all_blocks=1 00:33:44.458 --rc geninfo_unexecuted_blocks=1 00:33:44.458 00:33:44.458 ' 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:44.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:44.458 --rc genhtml_branch_coverage=1 00:33:44.458 --rc genhtml_function_coverage=1 00:33:44.458 --rc genhtml_legend=1 00:33:44.458 --rc geninfo_all_blocks=1 00:33:44.458 --rc geninfo_unexecuted_blocks=1 00:33:44.458 00:33:44.458 ' 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.458 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:44.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:33:44.459 06:45:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:52.600 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:52.601 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:52.601 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
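[Note] The loop entered here maps each PCI function that matched the e810 device-ID table (0x8086:0x159b, found above at 0000:31:00.0 and 0000:31:00.1) to its kernel net device through a sysfs glob. A condensed sketch of that mapping, using the exact glob from the trace:

for pci in 0000:31:00.0 0000:31:00.1; do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: ${net##*/}"    # resolves to cvl_0_0 / cvl_0_1 in this job
    done
done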
00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:52.601 Found net devices under 0000:31:00.0: cvl_0_0 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:52.601 Found net devices under 0000:31:00.1: cvl_0_1 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:52.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:33:52.601 00:33:52.601 --- 10.0.0.2 ping statistics --- 00:33:52.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.601 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:52.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:33:52.601 00:33:52.601 --- 10.0.0.1 ping statistics --- 00:33:52.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.601 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2898062 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2898062 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2898062 ']' 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:52.601 06:45:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.601 [2024-11-20 06:45:11.865189] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:33:52.601 [2024-11-20 06:45:11.865253] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.601 [2024-11-20 06:45:11.964398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:52.601 [2024-11-20 06:45:12.017402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.601 [2024-11-20 06:45:12.017451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.602 [2024-11-20 06:45:12.017460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.602 [2024-11-20 06:45:12.017468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.602 [2024-11-20 06:45:12.017474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.602 [2024-11-20 06:45:12.019561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:52.602 [2024-11-20 06:45:12.019726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.602 [2024-11-20 06:45:12.019727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.862 [2024-11-20 06:45:12.728850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.862 Malloc0 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:52.862 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.863 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
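[Note] tgt_init brings the target up with a fixed RPC sequence: create the TCP transport, back it with a 64 MiB malloc bdev (512-byte blocks), and export that bdev through a subsystem listening on the namespaced data IP; the last two RPCs of the sequence follow just below. The same bring-up, condensed (rpc.py defaults to the /var/tmp/spdk.sock socket the target opened):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420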
00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:53.124 [2024-11-20 06:45:12.804038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:53.124 { 00:33:53.124 "params": { 00:33:53.124 "name": "Nvme$subsystem", 00:33:53.124 "trtype": "$TEST_TRANSPORT", 00:33:53.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.124 "adrfam": "ipv4", 00:33:53.124 "trsvcid": "$NVMF_PORT", 00:33:53.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.124 "hdgst": ${hdgst:-false}, 00:33:53.124 "ddgst": ${ddgst:-false} 00:33:53.124 }, 00:33:53.124 "method": "bdev_nvme_attach_controller" 00:33:53.124 } 00:33:53.124 EOF 00:33:53.124 )") 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:53.124 06:45:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:53.125 "params": { 00:33:53.125 "name": "Nvme1", 00:33:53.125 "trtype": "tcp", 00:33:53.125 "traddr": "10.0.0.2", 00:33:53.125 "adrfam": "ipv4", 00:33:53.125 "trsvcid": "4420", 00:33:53.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.125 "hdgst": false, 00:33:53.125 "ddgst": false 00:33:53.125 }, 00:33:53.125 "method": "bdev_nvme_attach_controller" 00:33:53.125 }' 00:33:53.125 [2024-11-20 06:45:12.862934] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:33:53.125 [2024-11-20 06:45:12.863014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898438 ] 00:33:53.125 [2024-11-20 06:45:12.959028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.125 [2024-11-20 06:45:13.011588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.697 Running I/O for 1 seconds... 00:33:54.640 8724.00 IOPS, 34.08 MiB/s 00:33:54.640 Latency(us) 00:33:54.640 [2024-11-20T05:45:14.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.640 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:54.640 Verification LBA range: start 0x0 length 0x4000 00:33:54.640 Nvme1n1 : 1.01 8753.25 34.19 0.00 0.00 14562.71 2798.93 13325.65 00:33:54.640 [2024-11-20T05:45:14.560Z] =================================================================================================================== 00:33:54.640 [2024-11-20T05:45:14.560Z] Total : 8753.25 34.19 0.00 0.00 14562.71 2798.93 13325.65 00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2898712 00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:54.640 { 00:33:54.640 "params": { 00:33:54.640 "name": "Nvme$subsystem", 00:33:54.640 "trtype": "$TEST_TRANSPORT", 00:33:54.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.640 "adrfam": "ipv4", 00:33:54.640 "trsvcid": "$NVMF_PORT", 00:33:54.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.640 "hdgst": ${hdgst:-false}, 00:33:54.640 "ddgst": ${ddgst:-false} 00:33:54.640 }, 00:33:54.640 "method": "bdev_nvme_attach_controller" 00:33:54.640 } 00:33:54.640 EOF 00:33:54.640 )") 00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
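[Note] gen_nvmf_target_json, traced here, emits one bdev_nvme_attach_controller config entry per subsystem (hdgst/ddgst default to false), and bdevperf reads that JSON through an inherited file descriptor instead of a temp file. A sketch of the consuming side; bash rewrites the process substitution into the /dev/fd/62 and /dev/fd/63 paths seen in these two invocations:

./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1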
00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:54.640 06:45:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:54.640 "params": { 00:33:54.640 "name": "Nvme1", 00:33:54.640 "trtype": "tcp", 00:33:54.640 "traddr": "10.0.0.2", 00:33:54.640 "adrfam": "ipv4", 00:33:54.640 "trsvcid": "4420", 00:33:54.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.640 "hdgst": false, 00:33:54.640 "ddgst": false 00:33:54.640 }, 00:33:54.640 "method": "bdev_nvme_attach_controller" 00:33:54.640 }' 00:33:54.640 [2024-11-20 06:45:14.524675] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:33:54.640 [2024-11-20 06:45:14.524762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898712 ] 00:33:54.901 [2024-11-20 06:45:14.618437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.901 [2024-11-20 06:45:14.670637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.162 Running I/O for 15 seconds... 00:33:57.046 9618.00 IOPS, 37.57 MiB/s [2024-11-20T05:45:17.538Z] 10400.00 IOPS, 40.62 MiB/s [2024-11-20T05:45:17.538Z] 06:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2898062 00:33:57.618 06:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:57.618 [2024-11-20 06:45:17.485999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.618 [2024-11-20 06:45:17.486042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.618 [2024-11-20 06:45:17.486063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.618 [2024-11-20 06:45:17.486074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.618 [2024-11-20 06:45:17.486087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.618 [2024-11-20 06:45:17.486097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.618 [2024-11-20 06:45:17.486109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.618 [2024-11-20 06:45:17.486118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.618 [2024-11-20 06:45:17.486129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.618 [2024-11-20 06:45:17.486137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.618 [2024-11-20 06:45:17.486146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.618 [2024-11-20 
06:45:17.486154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.618
[... the same two-line pattern — a READ command print (sqid:1, lba stepping by 8 from 91736 through 92128 here, and continuing) followed by an ABORTED - SQ DELETION (00/08) completion — repeats for every queued command; the duplicated entries are elided ...]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.619 [2024-11-20 06:45:17.487079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.619 [2024-11-20 06:45:17.487089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.619 [2024-11-20 06:45:17.487096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.619 [2024-11-20 06:45:17.487106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.619 [2024-11-20 06:45:17.487113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.619 [2024-11-20 06:45:17.487123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.619 [2024-11-20 06:45:17.487130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.619 [2024-11-20 06:45:17.487139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.619 [2024-11-20 06:45:17.487146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.619 [2024-11-20 06:45:17.487156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.619 [2024-11-20 06:45:17.487163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.619 [2024-11-20 06:45:17.487172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.619 [2024-11-20 06:45:17.487180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.619 [2024-11-20 06:45:17.487192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.619 [2024-11-20 06:45:17.487199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.619 [2024-11-20 06:45:17.487209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.619 [2024-11-20 06:45:17.487216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.619 [2024-11-20 06:45:17.487226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.619 [2024-11-20 06:45:17.487234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.619 [2024-11-20 06:45:17.487243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.619 [2024-11-20 06:45:17.487251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.619 [2024-11-20 06:45:17.487260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:119 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92376 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:57.620 [2024-11-20 06:45:17.487842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.620 [2024-11-20 06:45:17.487964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.620 [2024-11-20 06:45:17.487981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.487991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.620 [2024-11-20 06:45:17.487999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.620 [2024-11-20 06:45:17.488008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.620 [2024-11-20 06:45:17.488015] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.621 [2024-11-20 06:45:17.488032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.621 [2024-11-20 06:45:17.488049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.621 [2024-11-20 06:45:17.488065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.621 [2024-11-20 06:45:17.488082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488186] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:57.621 [2024-11-20 06:45:17.488353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.621 [2024-11-20 06:45:17.488360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
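Every aborted completion above reports the same status pair, (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion. A minimal sketch of how those fields unpack from the 16-bit completion status (illustrative only, not SPDK source; it assumes nothing beyond the spec's status layout):

/*
 * Illustrative sketch, not SPDK code: decode the "(00/08)" pair printed
 * with each completion above, assuming the standard NVMe completion
 * status layout: bit 15 DNR, bit 14 More, bits 11:9 Status Code Type,
 * bits 8:1 Status Code. SCT 0x0 / SC 0x08 is the generic "Command
 * Aborted due to SQ Deletion"; m:0 and dnr:0 match bits 14/15 clear.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint16_t status = (0x0u << 9) | (0x08u << 1); /* SCT=0x0, SC=0x08 */

    unsigned sc  = (status >> 1) & 0xff; /* status code      -> 0x08 */
    unsigned sct = (status >> 9) & 0x7;  /* status code type -> 0x00 */
    unsigned m   = (status >> 14) & 0x1; /* more             -> 0    */
    unsigned dnr = (status >> 15) & 0x1; /* do not retry     -> 0    */

    printf("ABORTED - SQ DELETION (%02x/%02x) m:%u dnr:%u\n", sct, sc, m, dnr);
    return 0;
}

With dnr:0 the host is allowed to retry the command, which is why the bdev layer keeps requeueing and resetting below rather than failing the I/O outright.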
00:33:57.621 [2024-11-20 06:45:17.488368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37550 is same with the state(6) to be set
00:33:57.621 [2024-11-20 06:45:17.488378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:57.621 [2024-11-20 06:45:17.488384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:57.621 [2024-11-20 06:45:17.488391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92648 len:8 PRP1 0x0 PRP2 0x0
00:33:57.621 [2024-11-20 06:45:17.488399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:57.621 [2024-11-20 06:45:17.492037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:57.621 [2024-11-20 06:45:17.492093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:57.621 [2024-11-20 06:45:17.492725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.621 [2024-11-20 06:45:17.492742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:57.621 [2024-11-20 06:45:17.492756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:57.621 [2024-11-20 06:45:17.492974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:57.621 [2024-11-20 06:45:17.493192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:57.621 [2024-11-20 06:45:17.493201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:57.621 [2024-11-20 06:45:17.493210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:57.621 [2024-11-20 06:45:17.493218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... the same reset cycle (resetting controller -> connect() failed, errno = 111 -> sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 -> Ctrlr is in error state -> controller reinitialization failed -> Resetting controller failed) repeats with identical output roughly every 13-14 ms, from 06:45:17.506048 through 06:45:17.783999 ...]
00:33:57.886 [2024-11-20 06:45:17.796820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:57.886 [2024-11-20 06:45:17.797396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.886 [2024-11-20 06:45:17.797419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:57.886 [2024-11-20 06:45:17.797428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:57.886 [2024-11-20 06:45:17.797646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:57.886 [2024-11-20 06:45:17.797874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:57.886 [2024-11-20 06:45:17.797883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:57.886 [2024-11-20 06:45:17.797891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:57.886 [2024-11-20 06:45:17.797899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.149 [2024-11-20 06:45:17.810697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.149 [2024-11-20 06:45:17.811265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.149 [2024-11-20 06:45:17.811289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.149 [2024-11-20 06:45:17.811297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.149 [2024-11-20 06:45:17.811514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.149 [2024-11-20 06:45:17.811734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.149 [2024-11-20 06:45:17.811744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.149 [2024-11-20 06:45:17.811761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.149 [2024-11-20 06:45:17.811768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.149 [2024-11-20 06:45:17.824552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.149 [2024-11-20 06:45:17.825056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.149 [2024-11-20 06:45:17.825079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.149 [2024-11-20 06:45:17.825087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.149 [2024-11-20 06:45:17.825304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.149 [2024-11-20 06:45:17.825523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.149 [2024-11-20 06:45:17.825533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.149 [2024-11-20 06:45:17.825541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.149 [2024-11-20 06:45:17.825548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.149 [2024-11-20 06:45:17.838363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.149 [2024-11-20 06:45:17.839100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.149 [2024-11-20 06:45:17.839162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.149 [2024-11-20 06:45:17.839174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.149 [2024-11-20 06:45:17.839426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.149 [2024-11-20 06:45:17.839651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.149 [2024-11-20 06:45:17.839660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.149 [2024-11-20 06:45:17.839669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.149 [2024-11-20 06:45:17.839679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.149 [2024-11-20 06:45:17.852313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.149 [2024-11-20 06:45:17.852889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.149 [2024-11-20 06:45:17.852952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.149 [2024-11-20 06:45:17.852967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.149 [2024-11-20 06:45:17.853227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.149 [2024-11-20 06:45:17.853451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.149 [2024-11-20 06:45:17.853461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.149 [2024-11-20 06:45:17.853469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.149 [2024-11-20 06:45:17.853478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.149 9281.67 IOPS, 36.26 MiB/s [2024-11-20T05:45:18.069Z] [2024-11-20 06:45:17.866083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.149 [2024-11-20 06:45:17.866557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.149 [2024-11-20 06:45:17.866587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.149 [2024-11-20 06:45:17.866596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.149 [2024-11-20 06:45:17.866826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.149 [2024-11-20 06:45:17.867058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.149 [2024-11-20 06:45:17.867068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.149 [2024-11-20 06:45:17.867076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.149 [2024-11-20 06:45:17.867084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
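The interleaved tick "9281.67 IOPS, 36.26 MiB/s" above is the periodic performance line from the I/O generator driving the test; its two bracketed timestamps are the same moment, one hour apart, consistent with a UTC+1 local clock plus the UTC ISO-8601 stamp. The throughput figure is consistent with a 4 KiB I/O size (an assumption; the block size and queue depth are not shown in this excerpt): 9281.67 IOPS x 4096 B = 38,017,720 B/s = 36.26 MiB/s.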
00:33:58.149 [2024-11-20 06:45:17.879887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.149 [2024-11-20 06:45:17.880447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.149 [2024-11-20 06:45:17.880472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.149 [2024-11-20 06:45:17.880488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.149 [2024-11-20 06:45:17.880706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.149 [2024-11-20 06:45:17.880932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.149 [2024-11-20 06:45:17.880941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.149 [2024-11-20 06:45:17.880948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.149 [2024-11-20 06:45:17.880956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.149 [2024-11-20 06:45:17.893761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.149 [2024-11-20 06:45:17.894413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.149 [2024-11-20 06:45:17.894475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.149 [2024-11-20 06:45:17.894488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.149 [2024-11-20 06:45:17.894740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.149 [2024-11-20 06:45:17.894977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.149 [2024-11-20 06:45:17.894987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.149 [2024-11-20 06:45:17.894995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.149 [2024-11-20 06:45:17.895005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.149 [2024-11-20 06:45:17.907602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.149 [2024-11-20 06:45:17.908312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.149 [2024-11-20 06:45:17.908374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.149 [2024-11-20 06:45:17.908389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.149 [2024-11-20 06:45:17.908641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.149 [2024-11-20 06:45:17.908874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.149 [2024-11-20 06:45:17.908884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.149 [2024-11-20 06:45:17.908893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.149 [2024-11-20 06:45:17.908902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.149 [2024-11-20 06:45:17.921375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.149 [2024-11-20 06:45:17.921980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.149 [2024-11-20 06:45:17.922010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.149 [2024-11-20 06:45:17.922019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.149 [2024-11-20 06:45:17.922238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.150 [2024-11-20 06:45:17.922465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.150 [2024-11-20 06:45:17.922477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.150 [2024-11-20 06:45:17.922485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.150 [2024-11-20 06:45:17.922493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.150 [2024-11-20 06:45:17.935318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.150 [2024-11-20 06:45:17.935890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.150 [2024-11-20 06:45:17.935918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.150 [2024-11-20 06:45:17.935926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.150 [2024-11-20 06:45:17.936146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.150 [2024-11-20 06:45:17.936364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.150 [2024-11-20 06:45:17.936374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.150 [2024-11-20 06:45:17.936382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.150 [2024-11-20 06:45:17.936389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.150 [2024-11-20 06:45:17.949188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.150 [2024-11-20 06:45:17.949763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.150 [2024-11-20 06:45:17.949788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.150 [2024-11-20 06:45:17.949797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.150 [2024-11-20 06:45:17.950015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.150 [2024-11-20 06:45:17.950233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.150 [2024-11-20 06:45:17.950243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.150 [2024-11-20 06:45:17.950251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.150 [2024-11-20 06:45:17.950258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.150 [2024-11-20 06:45:17.963051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.150 [2024-11-20 06:45:17.963612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.150 [2024-11-20 06:45:17.963636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.150 [2024-11-20 06:45:17.963644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.150 [2024-11-20 06:45:17.963870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.150 [2024-11-20 06:45:17.964089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.150 [2024-11-20 06:45:17.964098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.150 [2024-11-20 06:45:17.964113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.150 [2024-11-20 06:45:17.964122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.150 [2024-11-20 06:45:17.976939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.150 [2024-11-20 06:45:17.977640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.150 [2024-11-20 06:45:17.977702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.150 [2024-11-20 06:45:17.977715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.150 [2024-11-20 06:45:17.977979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.150 [2024-11-20 06:45:17.978205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.150 [2024-11-20 06:45:17.978214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.150 [2024-11-20 06:45:17.978223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.150 [2024-11-20 06:45:17.978232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.150 [2024-11-20 06:45:17.990845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.150 [2024-11-20 06:45:17.991468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.150 [2024-11-20 06:45:17.991496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.150 [2024-11-20 06:45:17.991505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.150 [2024-11-20 06:45:17.991723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.150 [2024-11-20 06:45:17.991951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.150 [2024-11-20 06:45:17.991960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.150 [2024-11-20 06:45:17.991969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.150 [2024-11-20 06:45:17.991977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.150 [2024-11-20 06:45:18.004797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.150 [2024-11-20 06:45:18.005398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.150 [2024-11-20 06:45:18.005423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.150 [2024-11-20 06:45:18.005431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.150 [2024-11-20 06:45:18.005647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.150 [2024-11-20 06:45:18.005875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.150 [2024-11-20 06:45:18.005885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.150 [2024-11-20 06:45:18.005893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.150 [2024-11-20 06:45:18.005901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.150 [2024-11-20 06:45:18.018705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.150 [2024-11-20 06:45:18.019205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.150 [2024-11-20 06:45:18.019229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.150 [2024-11-20 06:45:18.019238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.150 [2024-11-20 06:45:18.019455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.150 [2024-11-20 06:45:18.019674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.150 [2024-11-20 06:45:18.019687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.150 [2024-11-20 06:45:18.019696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.150 [2024-11-20 06:45:18.019703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.150 [2024-11-20 06:45:18.032555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.150 [2024-11-20 06:45:18.033255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.150 [2024-11-20 06:45:18.033317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.150 [2024-11-20 06:45:18.033330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.150 [2024-11-20 06:45:18.033582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.150 [2024-11-20 06:45:18.033820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.150 [2024-11-20 06:45:18.033834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.150 [2024-11-20 06:45:18.033845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.150 [2024-11-20 06:45:18.033858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.150 [2024-11-20 06:45:18.046492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.150 [2024-11-20 06:45:18.047189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.150 [2024-11-20 06:45:18.047251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.150 [2024-11-20 06:45:18.047264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.150 [2024-11-20 06:45:18.047516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.150 [2024-11-20 06:45:18.047741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.150 [2024-11-20 06:45:18.047765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.150 [2024-11-20 06:45:18.047775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.150 [2024-11-20 06:45:18.047785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.150 [2024-11-20 06:45:18.060401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.150 [2024-11-20 06:45:18.061192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.150 [2024-11-20 06:45:18.061254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.150 [2024-11-20 06:45:18.061274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.151 [2024-11-20 06:45:18.061526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.151 [2024-11-20 06:45:18.061765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.151 [2024-11-20 06:45:18.061776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.151 [2024-11-20 06:45:18.061784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.151 [2024-11-20 06:45:18.061793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.413 [2024-11-20 06:45:18.074224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.413 [2024-11-20 06:45:18.074826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.413 [2024-11-20 06:45:18.074856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.413 [2024-11-20 06:45:18.074865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.413 [2024-11-20 06:45:18.075087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.413 [2024-11-20 06:45:18.075305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.413 [2024-11-20 06:45:18.075314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.413 [2024-11-20 06:45:18.075322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.413 [2024-11-20 06:45:18.075329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.413 [2024-11-20 06:45:18.088140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.413 [2024-11-20 06:45:18.088725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.413 [2024-11-20 06:45:18.088758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.413 [2024-11-20 06:45:18.088767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.413 [2024-11-20 06:45:18.088985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.413 [2024-11-20 06:45:18.089203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.413 [2024-11-20 06:45:18.089214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.413 [2024-11-20 06:45:18.089221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.413 [2024-11-20 06:45:18.089229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.413 [2024-11-20 06:45:18.102029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.413 [2024-11-20 06:45:18.102467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.413 [2024-11-20 06:45:18.102494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.413 [2024-11-20 06:45:18.102503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.413 [2024-11-20 06:45:18.102723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.413 [2024-11-20 06:45:18.102958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.413 [2024-11-20 06:45:18.102968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.413 [2024-11-20 06:45:18.102976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.413 [2024-11-20 06:45:18.102983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.413 [2024-11-20 06:45:18.115791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.413 [2024-11-20 06:45:18.116504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.414 [2024-11-20 06:45:18.116567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.414 [2024-11-20 06:45:18.116580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.414 [2024-11-20 06:45:18.116843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.414 [2024-11-20 06:45:18.117068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.414 [2024-11-20 06:45:18.117078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.414 [2024-11-20 06:45:18.117086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.414 [2024-11-20 06:45:18.117095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.414 [2024-11-20 06:45:18.129697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.414 [2024-11-20 06:45:18.130443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.414 [2024-11-20 06:45:18.130506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.414 [2024-11-20 06:45:18.130519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.414 [2024-11-20 06:45:18.130783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.414 [2024-11-20 06:45:18.131007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.414 [2024-11-20 06:45:18.131016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.414 [2024-11-20 06:45:18.131025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.414 [2024-11-20 06:45:18.131034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.414 [2024-11-20 06:45:18.143643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.414 [2024-11-20 06:45:18.144237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.414 [2024-11-20 06:45:18.144300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.414 [2024-11-20 06:45:18.144313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.414 [2024-11-20 06:45:18.144564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.414 [2024-11-20 06:45:18.144800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.414 [2024-11-20 06:45:18.144811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.414 [2024-11-20 06:45:18.144826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.414 [2024-11-20 06:45:18.144836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.414 [2024-11-20 06:45:18.157459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.414 [2024-11-20 06:45:18.158076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.414 [2024-11-20 06:45:18.158139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.414 [2024-11-20 06:45:18.158152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.414 [2024-11-20 06:45:18.158403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.414 [2024-11-20 06:45:18.158627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.414 [2024-11-20 06:45:18.158636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.414 [2024-11-20 06:45:18.158645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.414 [2024-11-20 06:45:18.158654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.414 [2024-11-20 06:45:18.171281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.414 [2024-11-20 06:45:18.171863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.414 [2024-11-20 06:45:18.171926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.414 [2024-11-20 06:45:18.171940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.414 [2024-11-20 06:45:18.172193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.414 [2024-11-20 06:45:18.172417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.414 [2024-11-20 06:45:18.172427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.414 [2024-11-20 06:45:18.172436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.414 [2024-11-20 06:45:18.172445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.414 [2024-11-20 06:45:18.185077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.414 [2024-11-20 06:45:18.185788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.414 [2024-11-20 06:45:18.185850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.414 [2024-11-20 06:45:18.185865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.414 [2024-11-20 06:45:18.186117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.414 [2024-11-20 06:45:18.186340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.414 [2024-11-20 06:45:18.186350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.414 [2024-11-20 06:45:18.186359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.414 [2024-11-20 06:45:18.186368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.414 [2024-11-20 06:45:18.198985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.414 [2024-11-20 06:45:18.199719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.414 [2024-11-20 06:45:18.199791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.414 [2024-11-20 06:45:18.199804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.414 [2024-11-20 06:45:18.200056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.414 [2024-11-20 06:45:18.200279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.414 [2024-11-20 06:45:18.200288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.414 [2024-11-20 06:45:18.200297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.414 [2024-11-20 06:45:18.200306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.414 [2024-11-20 06:45:18.212912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.414 [2024-11-20 06:45:18.213532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.414 [2024-11-20 06:45:18.213594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.414 [2024-11-20 06:45:18.213607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.414 [2024-11-20 06:45:18.213873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.414 [2024-11-20 06:45:18.214098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.414 [2024-11-20 06:45:18.214107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.414 [2024-11-20 06:45:18.214116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.414 [2024-11-20 06:45:18.214125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.414 [2024-11-20 06:45:18.226719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.414 [2024-11-20 06:45:18.227417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.414 [2024-11-20 06:45:18.227478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.414 [2024-11-20 06:45:18.227491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.414 [2024-11-20 06:45:18.227742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.414 [2024-11-20 06:45:18.227980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.414 [2024-11-20 06:45:18.227990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.414 [2024-11-20 06:45:18.227999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.414 [2024-11-20 06:45:18.228008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.414 [2024-11-20 06:45:18.240633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.414 [2024-11-20 06:45:18.241328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.414 [2024-11-20 06:45:18.241389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.414 [2024-11-20 06:45:18.241409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.414 [2024-11-20 06:45:18.241661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.414 [2024-11-20 06:45:18.241901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.414 [2024-11-20 06:45:18.241912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.414 [2024-11-20 06:45:18.241921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.414 [2024-11-20 06:45:18.241931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.415 [2024-11-20 06:45:18.254541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.415 [2024-11-20 06:45:18.255072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.415 [2024-11-20 06:45:18.255102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.415 [2024-11-20 06:45:18.255111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.415 [2024-11-20 06:45:18.255331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.415 [2024-11-20 06:45:18.255550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.415 [2024-11-20 06:45:18.255560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.415 [2024-11-20 06:45:18.255569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.415 [2024-11-20 06:45:18.255578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.415 [2024-11-20 06:45:18.268398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.415 [2024-11-20 06:45:18.269113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.415 [2024-11-20 06:45:18.269176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.415 [2024-11-20 06:45:18.269188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.415 [2024-11-20 06:45:18.269440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.415 [2024-11-20 06:45:18.269663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.415 [2024-11-20 06:45:18.269672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.415 [2024-11-20 06:45:18.269680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.415 [2024-11-20 06:45:18.269689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.415 [2024-11-20 06:45:18.282329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.415 [2024-11-20 06:45:18.282909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.415 [2024-11-20 06:45:18.282972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.415 [2024-11-20 06:45:18.282986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.415 [2024-11-20 06:45:18.283239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.415 [2024-11-20 06:45:18.283470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.415 [2024-11-20 06:45:18.283480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.415 [2024-11-20 06:45:18.283488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.415 [2024-11-20 06:45:18.283498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.415 [2024-11-20 06:45:18.296108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.415 [2024-11-20 06:45:18.296783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.415 [2024-11-20 06:45:18.296845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.415 [2024-11-20 06:45:18.296858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.415 [2024-11-20 06:45:18.297110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.415 [2024-11-20 06:45:18.297334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.415 [2024-11-20 06:45:18.297343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.415 [2024-11-20 06:45:18.297352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.415 [2024-11-20 06:45:18.297361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.415 [2024-11-20 06:45:18.309979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.415 [2024-11-20 06:45:18.310685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.415 [2024-11-20 06:45:18.310759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.415 [2024-11-20 06:45:18.310773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.415 [2024-11-20 06:45:18.311024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.415 [2024-11-20 06:45:18.311249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.415 [2024-11-20 06:45:18.311258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.415 [2024-11-20 06:45:18.311266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.415 [2024-11-20 06:45:18.311276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.415 [2024-11-20 06:45:18.323874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.415 [2024-11-20 06:45:18.324618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.415 [2024-11-20 06:45:18.324680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.415 [2024-11-20 06:45:18.324693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.415 [2024-11-20 06:45:18.324958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.415 [2024-11-20 06:45:18.325182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.415 [2024-11-20 06:45:18.325191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.415 [2024-11-20 06:45:18.325207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.415 [2024-11-20 06:45:18.325216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.677 [2024-11-20 06:45:18.337851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.678 [2024-11-20 06:45:18.338542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-11-20 06:45:18.338604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.678 [2024-11-20 06:45:18.338617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.678 [2024-11-20 06:45:18.338883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.678 [2024-11-20 06:45:18.339108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.678 [2024-11-20 06:45:18.339117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.678 [2024-11-20 06:45:18.339125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.678 [2024-11-20 06:45:18.339134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.678 [2024-11-20 06:45:18.351717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.678 [2024-11-20 06:45:18.352451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-11-20 06:45:18.352513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.678 [2024-11-20 06:45:18.352526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.678 [2024-11-20 06:45:18.352793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.678 [2024-11-20 06:45:18.353017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.678 [2024-11-20 06:45:18.353026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.678 [2024-11-20 06:45:18.353034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.678 [2024-11-20 06:45:18.353043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.678 [2024-11-20 06:45:18.365648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.678 [2024-11-20 06:45:18.366334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-11-20 06:45:18.366396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.678 [2024-11-20 06:45:18.366409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.678 [2024-11-20 06:45:18.366660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.678 [2024-11-20 06:45:18.366896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.678 [2024-11-20 06:45:18.366907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.678 [2024-11-20 06:45:18.366916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.678 [2024-11-20 06:45:18.366925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.678 [2024-11-20 06:45:18.379557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.678 [2024-11-20 06:45:18.380154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-11-20 06:45:18.380183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.678 [2024-11-20 06:45:18.380192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.678 [2024-11-20 06:45:18.380412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.678 [2024-11-20 06:45:18.380631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.678 [2024-11-20 06:45:18.380640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.678 [2024-11-20 06:45:18.380648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.678 [2024-11-20 06:45:18.380656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.678 [2024-11-20 06:45:18.393476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.678 [2024-11-20 06:45:18.394154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-11-20 06:45:18.394216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.678 [2024-11-20 06:45:18.394229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.678 [2024-11-20 06:45:18.394481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.678 [2024-11-20 06:45:18.394704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.678 [2024-11-20 06:45:18.394713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.678 [2024-11-20 06:45:18.394721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.678 [2024-11-20 06:45:18.394730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.678 [2024-11-20 06:45:18.407598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.678 [2024-11-20 06:45:18.408200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-11-20 06:45:18.408230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.678 [2024-11-20 06:45:18.408239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.678 [2024-11-20 06:45:18.408460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.678 [2024-11-20 06:45:18.408679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.678 [2024-11-20 06:45:18.408689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.678 [2024-11-20 06:45:18.408697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.678 [2024-11-20 06:45:18.408705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.678 [2024-11-20 06:45:18.421491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.678 [2024-11-20 06:45:18.422067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-11-20 06:45:18.422094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.678 [2024-11-20 06:45:18.422110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.678 [2024-11-20 06:45:18.422330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.678 [2024-11-20 06:45:18.422548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.678 [2024-11-20 06:45:18.422558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.678 [2024-11-20 06:45:18.422566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.678 [2024-11-20 06:45:18.422573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.678 [2024-11-20 06:45:18.435365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.678 [2024-11-20 06:45:18.436026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-11-20 06:45:18.436087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.678 [2024-11-20 06:45:18.436100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.678 [2024-11-20 06:45:18.436351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.678 [2024-11-20 06:45:18.436574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.678 [2024-11-20 06:45:18.436584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.678 [2024-11-20 06:45:18.436592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.678 [2024-11-20 06:45:18.436601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.678 [2024-11-20 06:45:18.449196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.678 [2024-11-20 06:45:18.449883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-11-20 06:45:18.449945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.678 [2024-11-20 06:45:18.449958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.678 [2024-11-20 06:45:18.450210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.678 [2024-11-20 06:45:18.450433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.678 [2024-11-20 06:45:18.450442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.678 [2024-11-20 06:45:18.450451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.678 [2024-11-20 06:45:18.450460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.678 [2024-11-20 06:45:18.463066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.678 [2024-11-20 06:45:18.463741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-11-20 06:45:18.463814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.678 [2024-11-20 06:45:18.463827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.678 [2024-11-20 06:45:18.464079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.678 [2024-11-20 06:45:18.464310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.679 [2024-11-20 06:45:18.464319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.679 [2024-11-20 06:45:18.464327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.679 [2024-11-20 06:45:18.464336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.679 [2024-11-20 06:45:18.476943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.679 [2024-11-20 06:45:18.477629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-11-20 06:45:18.477691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.679 [2024-11-20 06:45:18.477704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.679 [2024-11-20 06:45:18.477969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.679 [2024-11-20 06:45:18.478195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.679 [2024-11-20 06:45:18.478204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.679 [2024-11-20 06:45:18.478212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.679 [2024-11-20 06:45:18.478221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.679 [2024-11-20 06:45:18.490847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.679 [2024-11-20 06:45:18.491547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-11-20 06:45:18.491609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.679 [2024-11-20 06:45:18.491622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.679 [2024-11-20 06:45:18.491889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.679 [2024-11-20 06:45:18.492114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.679 [2024-11-20 06:45:18.492125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.679 [2024-11-20 06:45:18.492134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.679 [2024-11-20 06:45:18.492143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.679 [2024-11-20 06:45:18.504830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.679 [2024-11-20 06:45:18.505351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-11-20 06:45:18.505384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.679 [2024-11-20 06:45:18.505394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.679 [2024-11-20 06:45:18.505617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.679 [2024-11-20 06:45:18.505848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.679 [2024-11-20 06:45:18.505859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.679 [2024-11-20 06:45:18.505875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.679 [2024-11-20 06:45:18.505883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.679 [2024-11-20 06:45:18.518679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.679 [2024-11-20 06:45:18.519458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-11-20 06:45:18.519516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.679 [2024-11-20 06:45:18.519529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.679 [2024-11-20 06:45:18.519788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.679 [2024-11-20 06:45:18.520012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.679 [2024-11-20 06:45:18.520021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.679 [2024-11-20 06:45:18.520029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.679 [2024-11-20 06:45:18.520038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.679 [2024-11-20 06:45:18.532647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.679 [2024-11-20 06:45:18.533354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-11-20 06:45:18.533416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.679 [2024-11-20 06:45:18.533429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.679 [2024-11-20 06:45:18.533681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.679 [2024-11-20 06:45:18.533920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.679 [2024-11-20 06:45:18.533930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.679 [2024-11-20 06:45:18.533939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.679 [2024-11-20 06:45:18.533948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.679 [2024-11-20 06:45:18.546527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.679 [2024-11-20 06:45:18.547208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-11-20 06:45:18.547269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.679 [2024-11-20 06:45:18.547282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.679 [2024-11-20 06:45:18.547534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.679 [2024-11-20 06:45:18.547772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.679 [2024-11-20 06:45:18.547782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.679 [2024-11-20 06:45:18.547790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.679 [2024-11-20 06:45:18.547799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.679 [2024-11-20 06:45:18.559231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.679 [2024-11-20 06:45:18.559773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-11-20 06:45:18.559802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.679 [2024-11-20 06:45:18.559809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.679 [2024-11-20 06:45:18.559964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.679 [2024-11-20 06:45:18.560115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.679 [2024-11-20 06:45:18.560122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.679 [2024-11-20 06:45:18.560128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.679 [2024-11-20 06:45:18.560134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.679 [2024-11-20 06:45:18.571895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.679 [2024-11-20 06:45:18.572471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-11-20 06:45:18.572518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.679 [2024-11-20 06:45:18.572527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.679 [2024-11-20 06:45:18.572703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.679 [2024-11-20 06:45:18.572869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.679 [2024-11-20 06:45:18.572876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.679 [2024-11-20 06:45:18.572882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.679 [2024-11-20 06:45:18.572889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.679 [2024-11-20 06:45:18.584491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.679 [2024-11-20 06:45:18.585132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-11-20 06:45:18.585177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.679 [2024-11-20 06:45:18.585186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.679 [2024-11-20 06:45:18.585360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.679 [2024-11-20 06:45:18.585514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.679 [2024-11-20 06:45:18.585520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.679 [2024-11-20 06:45:18.585526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.679 [2024-11-20 06:45:18.585532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.942 [2024-11-20 06:45:18.597150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.942 [2024-11-20 06:45:18.597727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.942 [2024-11-20 06:45:18.597776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.942 [2024-11-20 06:45:18.597791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.942 [2024-11-20 06:45:18.597963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.942 [2024-11-20 06:45:18.598116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.942 [2024-11-20 06:45:18.598123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.942 [2024-11-20 06:45:18.598129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.942 [2024-11-20 06:45:18.598135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.942 [2024-11-20 06:45:18.609737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.942 [2024-11-20 06:45:18.610298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.942 [2024-11-20 06:45:18.610338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.942 [2024-11-20 06:45:18.610346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.942 [2024-11-20 06:45:18.610517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.942 [2024-11-20 06:45:18.610670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.942 [2024-11-20 06:45:18.610677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.942 [2024-11-20 06:45:18.610682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.942 [2024-11-20 06:45:18.610689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.942 [2024-11-20 06:45:18.622420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.942 [2024-11-20 06:45:18.622988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.942 [2024-11-20 06:45:18.623024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.942 [2024-11-20 06:45:18.623033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.942 [2024-11-20 06:45:18.623201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.942 [2024-11-20 06:45:18.623353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.942 [2024-11-20 06:45:18.623360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.942 [2024-11-20 06:45:18.623366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.942 [2024-11-20 06:45:18.623371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.942 [2024-11-20 06:45:18.635112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.942 [2024-11-20 06:45:18.635697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.942 [2024-11-20 06:45:18.635732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.942 [2024-11-20 06:45:18.635741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.942 [2024-11-20 06:45:18.635919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.942 [2024-11-20 06:45:18.636075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.942 [2024-11-20 06:45:18.636082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.942 [2024-11-20 06:45:18.636088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.942 [2024-11-20 06:45:18.636094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.942 [2024-11-20 06:45:18.647811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.942 [2024-11-20 06:45:18.648391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.942 [2024-11-20 06:45:18.648425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.942 [2024-11-20 06:45:18.648433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.942 [2024-11-20 06:45:18.648600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.942 [2024-11-20 06:45:18.648760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.942 [2024-11-20 06:45:18.648767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.942 [2024-11-20 06:45:18.648773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.942 [2024-11-20 06:45:18.648779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.942 [2024-11-20 06:45:18.660487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.942 [2024-11-20 06:45:18.661078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.942 [2024-11-20 06:45:18.661111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.942 [2024-11-20 06:45:18.661120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.942 [2024-11-20 06:45:18.661286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.942 [2024-11-20 06:45:18.661438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.942 [2024-11-20 06:45:18.661444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.943 [2024-11-20 06:45:18.661450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.943 [2024-11-20 06:45:18.661455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.943 [2024-11-20 06:45:18.673187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.943 [2024-11-20 06:45:18.673649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.943 [2024-11-20 06:45:18.673681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.943 [2024-11-20 06:45:18.673689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.943 [2024-11-20 06:45:18.673864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.943 [2024-11-20 06:45:18.674016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.943 [2024-11-20 06:45:18.674022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.943 [2024-11-20 06:45:18.674031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.943 [2024-11-20 06:45:18.674037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.943 [2024-11-20 06:45:18.685891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.943 [2024-11-20 06:45:18.686461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.943 [2024-11-20 06:45:18.686491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.943 [2024-11-20 06:45:18.686500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.943 [2024-11-20 06:45:18.686664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.943 [2024-11-20 06:45:18.686822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.943 [2024-11-20 06:45:18.686829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.943 [2024-11-20 06:45:18.686834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.943 [2024-11-20 06:45:18.686840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.943 [2024-11-20 06:45:18.698581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.943 [2024-11-20 06:45:18.699161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.943 [2024-11-20 06:45:18.699190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.943 [2024-11-20 06:45:18.699199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.943 [2024-11-20 06:45:18.699363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.943 [2024-11-20 06:45:18.699514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.943 [2024-11-20 06:45:18.699520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.943 [2024-11-20 06:45:18.699526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.943 [2024-11-20 06:45:18.699532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.943 [2024-11-20 06:45:18.711252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.943 [2024-11-20 06:45:18.711842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.943 [2024-11-20 06:45:18.711872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.943 [2024-11-20 06:45:18.711880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.943 [2024-11-20 06:45:18.712047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.943 [2024-11-20 06:45:18.712198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.943 [2024-11-20 06:45:18.712204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.943 [2024-11-20 06:45:18.712209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.943 [2024-11-20 06:45:18.712215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.943 [2024-11-20 06:45:18.723828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.943 [2024-11-20 06:45:18.724407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.943 [2024-11-20 06:45:18.724438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.943 [2024-11-20 06:45:18.724446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.943 [2024-11-20 06:45:18.724610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.943 [2024-11-20 06:45:18.724769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.943 [2024-11-20 06:45:18.724776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.943 [2024-11-20 06:45:18.724782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.943 [2024-11-20 06:45:18.724787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.943 [2024-11-20 06:45:18.736505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.943 [2024-11-20 06:45:18.736949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.943 [2024-11-20 06:45:18.736979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.943 [2024-11-20 06:45:18.736988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.943 [2024-11-20 06:45:18.737152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.943 [2024-11-20 06:45:18.737303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.943 [2024-11-20 06:45:18.737309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.943 [2024-11-20 06:45:18.737314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.943 [2024-11-20 06:45:18.737320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.943 [2024-11-20 06:45:18.749278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.943 [2024-11-20 06:45:18.749850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.943 [2024-11-20 06:45:18.749880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.943 [2024-11-20 06:45:18.749889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.943 [2024-11-20 06:45:18.750056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.943 [2024-11-20 06:45:18.750207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.943 [2024-11-20 06:45:18.750213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.943 [2024-11-20 06:45:18.750219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.943 [2024-11-20 06:45:18.750225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.943 [2024-11-20 06:45:18.761954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.943 [2024-11-20 06:45:18.762531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.943 [2024-11-20 06:45:18.762560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.943 [2024-11-20 06:45:18.762573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.943 [2024-11-20 06:45:18.762737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.943 [2024-11-20 06:45:18.762895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.943 [2024-11-20 06:45:18.762902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.943 [2024-11-20 06:45:18.762908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.943 [2024-11-20 06:45:18.762914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.943 [2024-11-20 06:45:18.774631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.943 [2024-11-20 06:45:18.775186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.943 [2024-11-20 06:45:18.775216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.943 [2024-11-20 06:45:18.775225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.943 [2024-11-20 06:45:18.775389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.943 [2024-11-20 06:45:18.775540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.943 [2024-11-20 06:45:18.775546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.943 [2024-11-20 06:45:18.775552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.943 [2024-11-20 06:45:18.775557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.943 [2024-11-20 06:45:18.787272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.943 [2024-11-20 06:45:18.787867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.943 [2024-11-20 06:45:18.787897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.943 [2024-11-20 06:45:18.787906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.943 [2024-11-20 06:45:18.788069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.944 [2024-11-20 06:45:18.788221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.944 [2024-11-20 06:45:18.788227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.944 [2024-11-20 06:45:18.788233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.944 [2024-11-20 06:45:18.788238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.944 [2024-11-20 06:45:18.799962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.944 [2024-11-20 06:45:18.800533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.944 [2024-11-20 06:45:18.800564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.944 [2024-11-20 06:45:18.800572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.944 [2024-11-20 06:45:18.800736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.944 [2024-11-20 06:45:18.800898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.944 [2024-11-20 06:45:18.800905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.944 [2024-11-20 06:45:18.800910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.944 [2024-11-20 06:45:18.800916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.944 [2024-11-20 06:45:18.812634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.944 [2024-11-20 06:45:18.813193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.944 [2024-11-20 06:45:18.813223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.944 [2024-11-20 06:45:18.813232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.944 [2024-11-20 06:45:18.813396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.944 [2024-11-20 06:45:18.813547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.944 [2024-11-20 06:45:18.813553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.944 [2024-11-20 06:45:18.813559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.944 [2024-11-20 06:45:18.813565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.944 [2024-11-20 06:45:18.825276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.944 [2024-11-20 06:45:18.825849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.944 [2024-11-20 06:45:18.825879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.944 [2024-11-20 06:45:18.825888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.944 [2024-11-20 06:45:18.826055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.944 [2024-11-20 06:45:18.826206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.944 [2024-11-20 06:45:18.826212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.944 [2024-11-20 06:45:18.826217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.944 [2024-11-20 06:45:18.826223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:58.944 [2024-11-20 06:45:18.837943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.944 [2024-11-20 06:45:18.838459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.944 [2024-11-20 06:45:18.838489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.944 [2024-11-20 06:45:18.838497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.944 [2024-11-20 06:45:18.838661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.944 [2024-11-20 06:45:18.838820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.944 [2024-11-20 06:45:18.838828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.944 [2024-11-20 06:45:18.838839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.944 [2024-11-20 06:45:18.838845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:58.944 [2024-11-20 06:45:18.850548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:58.944 [2024-11-20 06:45:18.851031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.944 [2024-11-20 06:45:18.851046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:58.944 [2024-11-20 06:45:18.851052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:58.944 [2024-11-20 06:45:18.851201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:58.944 [2024-11-20 06:45:18.851350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:58.944 [2024-11-20 06:45:18.851356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:58.944 [2024-11-20 06:45:18.851361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:58.944 [2024-11-20 06:45:18.851366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.206 6961.25 IOPS, 27.19 MiB/s [2024-11-20T05:45:19.126Z] [2024-11-20 06:45:18.863207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.206 [2024-11-20 06:45:18.863698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.206 [2024-11-20 06:45:18.863711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.206 [2024-11-20 06:45:18.863717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.206 [2024-11-20 06:45:18.863871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.206 [2024-11-20 06:45:18.864020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.206 [2024-11-20 06:45:18.864026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.206 [2024-11-20 06:45:18.864030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.206 [2024-11-20 06:45:18.864035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.206 [2024-11-20 06:45:18.875916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.206 [2024-11-20 06:45:18.876485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.206 [2024-11-20 06:45:18.876515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.206 [2024-11-20 06:45:18.876524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.206 [2024-11-20 06:45:18.876688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.207 [2024-11-20 06:45:18.876847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.207 [2024-11-20 06:45:18.876855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.207 [2024-11-20 06:45:18.876860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.207 [2024-11-20 06:45:18.876866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.207 [2024-11-20 06:45:18.888580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.207 [2024-11-20 06:45:18.889166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.207 [2024-11-20 06:45:18.889196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.207 [2024-11-20 06:45:18.889205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.207 [2024-11-20 06:45:18.889368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.207 [2024-11-20 06:45:18.889519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.207 [2024-11-20 06:45:18.889525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.207 [2024-11-20 06:45:18.889531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.207 [2024-11-20 06:45:18.889537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.207 [2024-11-20 06:45:18.901248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.207 [2024-11-20 06:45:18.901879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.207 [2024-11-20 06:45:18.901909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.207 [2024-11-20 06:45:18.901917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.207 [2024-11-20 06:45:18.902081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.207 [2024-11-20 06:45:18.902232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.207 [2024-11-20 06:45:18.902239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.207 [2024-11-20 06:45:18.902244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.207 [2024-11-20 06:45:18.902250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.207 [2024-11-20 06:45:18.913829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.207 [2024-11-20 06:45:18.914399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.207 [2024-11-20 06:45:18.914429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.207 [2024-11-20 06:45:18.914437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.207 [2024-11-20 06:45:18.914601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.207 [2024-11-20 06:45:18.914759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.207 [2024-11-20 06:45:18.914766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.207 [2024-11-20 06:45:18.914771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.207 [2024-11-20 06:45:18.914777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.207 [2024-11-20 06:45:18.926485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.207 [2024-11-20 06:45:18.927067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.207 [2024-11-20 06:45:18.927097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.207 [2024-11-20 06:45:18.927108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.207 [2024-11-20 06:45:18.927272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.207 [2024-11-20 06:45:18.927423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.207 [2024-11-20 06:45:18.927430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.207 [2024-11-20 06:45:18.927435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.207 [2024-11-20 06:45:18.927441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.207 [2024-11-20 06:45:18.939165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.207 [2024-11-20 06:45:18.939665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.207 [2024-11-20 06:45:18.939679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.207 [2024-11-20 06:45:18.939685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.207 [2024-11-20 06:45:18.939868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.207 [2024-11-20 06:45:18.940017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.207 [2024-11-20 06:45:18.940023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.207 [2024-11-20 06:45:18.940029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.207 [2024-11-20 06:45:18.940034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.207 [2024-11-20 06:45:18.951872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.207 [2024-11-20 06:45:18.952322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.207 [2024-11-20 06:45:18.952335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.207 [2024-11-20 06:45:18.952340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.207 [2024-11-20 06:45:18.952488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.207 [2024-11-20 06:45:18.952637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.207 [2024-11-20 06:45:18.952642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.207 [2024-11-20 06:45:18.952647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.207 [2024-11-20 06:45:18.952652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.207 [2024-11-20 06:45:18.964507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.207 [2024-11-20 06:45:18.964969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.207 [2024-11-20 06:45:18.964982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.207 [2024-11-20 06:45:18.964988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.207 [2024-11-20 06:45:18.965137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.207 [2024-11-20 06:45:18.965288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.207 [2024-11-20 06:45:18.965294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.207 [2024-11-20 06:45:18.965299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.207 [2024-11-20 06:45:18.965304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.207 [2024-11-20 06:45:18.977163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.207 [2024-11-20 06:45:18.977495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.207 [2024-11-20 06:45:18.977508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.207 [2024-11-20 06:45:18.977513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.207 [2024-11-20 06:45:18.977662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.207 [2024-11-20 06:45:18.977815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.207 [2024-11-20 06:45:18.977821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.207 [2024-11-20 06:45:18.977826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.207 [2024-11-20 06:45:18.977831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.207 [2024-11-20 06:45:18.989809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.207 [2024-11-20 06:45:18.990344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.207 [2024-11-20 06:45:18.990373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.207 [2024-11-20 06:45:18.990382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.207 [2024-11-20 06:45:18.990546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.207 [2024-11-20 06:45:18.990697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.207 [2024-11-20 06:45:18.990704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.207 [2024-11-20 06:45:18.990709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.207 [2024-11-20 06:45:18.990715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.207 [2024-11-20 06:45:19.002425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.207 [2024-11-20 06:45:19.003049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.207 [2024-11-20 06:45:19.003079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.208 [2024-11-20 06:45:19.003088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.208 [2024-11-20 06:45:19.003254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.208 [2024-11-20 06:45:19.003406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.208 [2024-11-20 06:45:19.003412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.208 [2024-11-20 06:45:19.003421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.208 [2024-11-20 06:45:19.003427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.208 [2024-11-20 06:45:19.015001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.208 [2024-11-20 06:45:19.015556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.208 [2024-11-20 06:45:19.015586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.208 [2024-11-20 06:45:19.015595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.208 [2024-11-20 06:45:19.015765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.208 [2024-11-20 06:45:19.015916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.208 [2024-11-20 06:45:19.015922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.208 [2024-11-20 06:45:19.015928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.208 [2024-11-20 06:45:19.015934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.208 [2024-11-20 06:45:19.027638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.208 [2024-11-20 06:45:19.028074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.208 [2024-11-20 06:45:19.028090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.208 [2024-11-20 06:45:19.028095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.208 [2024-11-20 06:45:19.028244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.208 [2024-11-20 06:45:19.028393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.208 [2024-11-20 06:45:19.028398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.208 [2024-11-20 06:45:19.028403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.208 [2024-11-20 06:45:19.028408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.208 [2024-11-20 06:45:19.040253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.208 [2024-11-20 06:45:19.040700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.208 [2024-11-20 06:45:19.040712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.208 [2024-11-20 06:45:19.040717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.208 [2024-11-20 06:45:19.040870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.208 [2024-11-20 06:45:19.041019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.208 [2024-11-20 06:45:19.041024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.208 [2024-11-20 06:45:19.041029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.208 [2024-11-20 06:45:19.041034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.208 [2024-11-20 06:45:19.052882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.208 [2024-11-20 06:45:19.053458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.208 [2024-11-20 06:45:19.053488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.208 [2024-11-20 06:45:19.053497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.208 [2024-11-20 06:45:19.053660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.208 [2024-11-20 06:45:19.053819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.208 [2024-11-20 06:45:19.053826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.208 [2024-11-20 06:45:19.053832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.208 [2024-11-20 06:45:19.053838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.208 [2024-11-20 06:45:19.065571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.208 [2024-11-20 06:45:19.066129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.208 [2024-11-20 06:45:19.066159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.208 [2024-11-20 06:45:19.066168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.208 [2024-11-20 06:45:19.066332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.208 [2024-11-20 06:45:19.066483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.208 [2024-11-20 06:45:19.066489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.208 [2024-11-20 06:45:19.066494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.208 [2024-11-20 06:45:19.066500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.208 [2024-11-20 06:45:19.078270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.208 [2024-11-20 06:45:19.078855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.208 [2024-11-20 06:45:19.078885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.208 [2024-11-20 06:45:19.078894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.208 [2024-11-20 06:45:19.079061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.208 [2024-11-20 06:45:19.079212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.208 [2024-11-20 06:45:19.079219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.208 [2024-11-20 06:45:19.079224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.208 [2024-11-20 06:45:19.079230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.208 [2024-11-20 06:45:19.090964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.208 [2024-11-20 06:45:19.091533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.208 [2024-11-20 06:45:19.091563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.208 [2024-11-20 06:45:19.091575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.208 [2024-11-20 06:45:19.091739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.208 [2024-11-20 06:45:19.091898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.208 [2024-11-20 06:45:19.091905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.208 [2024-11-20 06:45:19.091910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.208 [2024-11-20 06:45:19.091916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.208 [2024-11-20 06:45:19.103639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.208 [2024-11-20 06:45:19.104218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.208 [2024-11-20 06:45:19.104249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.208 [2024-11-20 06:45:19.104258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.208 [2024-11-20 06:45:19.104422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.208 [2024-11-20 06:45:19.104573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.208 [2024-11-20 06:45:19.104579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.208 [2024-11-20 06:45:19.104584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.208 [2024-11-20 06:45:19.104590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.208 [2024-11-20 06:45:19.116322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.208 [2024-11-20 06:45:19.116803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.208 [2024-11-20 06:45:19.116819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.208 [2024-11-20 06:45:19.116824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.208 [2024-11-20 06:45:19.116974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.208 [2024-11-20 06:45:19.117123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.208 [2024-11-20 06:45:19.117129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.208 [2024-11-20 06:45:19.117133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.208 [2024-11-20 06:45:19.117138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.470 [2024-11-20 06:45:19.129004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.470 [2024-11-20 06:45:19.129487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.470 [2024-11-20 06:45:19.129500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.470 [2024-11-20 06:45:19.129506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.470 [2024-11-20 06:45:19.129654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.470 [2024-11-20 06:45:19.129813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.470 [2024-11-20 06:45:19.129819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.470 [2024-11-20 06:45:19.129824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.470 [2024-11-20 06:45:19.129829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.470 [2024-11-20 06:45:19.141691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.470 [2024-11-20 06:45:19.142270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.470 [2024-11-20 06:45:19.142300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.470 [2024-11-20 06:45:19.142308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.470 [2024-11-20 06:45:19.142472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.470 [2024-11-20 06:45:19.142624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.470 [2024-11-20 06:45:19.142630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.470 [2024-11-20 06:45:19.142635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.470 [2024-11-20 06:45:19.142641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.470 [2024-11-20 06:45:19.154370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.470 [2024-11-20 06:45:19.154953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.470 [2024-11-20 06:45:19.154984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.470 [2024-11-20 06:45:19.154993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.470 [2024-11-20 06:45:19.155158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.470 [2024-11-20 06:45:19.155310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.470 [2024-11-20 06:45:19.155316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.470 [2024-11-20 06:45:19.155321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.155327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.167038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.167610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.167640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.167648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.167818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.167969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.167976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.167985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.167991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.179703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.180100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.180115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.180120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.180270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.180418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.180424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.180429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.180434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.192284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.192645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.192658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.192663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.192816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.192965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.192970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.192976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.192981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.204975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.205307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.205322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.205327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.205477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.205625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.205631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.205636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.205640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.217630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.218164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.218195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.218203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.218367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.218519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.218525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.218530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.218536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.230258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.230756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.230771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.230777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.230926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.231075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.231081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.231085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.231090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.242952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.243321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.243333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.243339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.243487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.243635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.243641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.243645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.243650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.255632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.256082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.256095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.256104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.256252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.256400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.256406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.256411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.256416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.268285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.268788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.268807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.268813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.268966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.269116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.269121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.269126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.269131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.280887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.281344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.281357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.281363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.281511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.281659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.281665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.281669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.281674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.293547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.294156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.294186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.294195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.294361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.294516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.294523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.294528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.294534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.306134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.306699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.306730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.306738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.306912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.307064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.307070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.307076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.471 [2024-11-20 06:45:19.307082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.471 [2024-11-20 06:45:19.318819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.471 [2024-11-20 06:45:19.319408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.471 [2024-11-20 06:45:19.319438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.471 [2024-11-20 06:45:19.319447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.471 [2024-11-20 06:45:19.319610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.471 [2024-11-20 06:45:19.319769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.471 [2024-11-20 06:45:19.319776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.471 [2024-11-20 06:45:19.319782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.472 [2024-11-20 06:45:19.319787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.472 [2024-11-20 06:45:19.331513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.472 [2024-11-20 06:45:19.331980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.472 [2024-11-20 06:45:19.331996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.472 [2024-11-20 06:45:19.332001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.472 [2024-11-20 06:45:19.332150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.472 [2024-11-20 06:45:19.332299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.472 [2024-11-20 06:45:19.332305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.472 [2024-11-20 06:45:19.332313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.472 [2024-11-20 06:45:19.332318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.472 [2024-11-20 06:45:19.344195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.472 [2024-11-20 06:45:19.344640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.472 [2024-11-20 06:45:19.344652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.472 [2024-11-20 06:45:19.344657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.472 [2024-11-20 06:45:19.344809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.472 [2024-11-20 06:45:19.344958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.472 [2024-11-20 06:45:19.344964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.472 [2024-11-20 06:45:19.344969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.472 [2024-11-20 06:45:19.344974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.472 [2024-11-20 06:45:19.356853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.472 [2024-11-20 06:45:19.357303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.472 [2024-11-20 06:45:19.357315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.472 [2024-11-20 06:45:19.357320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.472 [2024-11-20 06:45:19.357468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.472 [2024-11-20 06:45:19.357616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.472 [2024-11-20 06:45:19.357622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.472 [2024-11-20 06:45:19.357627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.472 [2024-11-20 06:45:19.357632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.472 [2024-11-20 06:45:19.369505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.472 [2024-11-20 06:45:19.369953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.472 [2024-11-20 06:45:19.369966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.472 [2024-11-20 06:45:19.369971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.472 [2024-11-20 06:45:19.370118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.472 [2024-11-20 06:45:19.370267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.472 [2024-11-20 06:45:19.370273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.472 [2024-11-20 06:45:19.370278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.472 [2024-11-20 06:45:19.370283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.472 [2024-11-20 06:45:19.382151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.472 [2024-11-20 06:45:19.382631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.472 [2024-11-20 06:45:19.382643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.472 [2024-11-20 06:45:19.382648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.472 [2024-11-20 06:45:19.382801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.472 [2024-11-20 06:45:19.382950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.472 [2024-11-20 06:45:19.382956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.472 [2024-11-20 06:45:19.382961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.472 [2024-11-20 06:45:19.382966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.734 [2024-11-20 06:45:19.394827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:59.734 [2024-11-20 06:45:19.395272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.734 [2024-11-20 06:45:19.395284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:33:59.734 [2024-11-20 06:45:19.395289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:33:59.734 [2024-11-20 06:45:19.395438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:33:59.734 [2024-11-20 06:45:19.395586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:59.734 [2024-11-20 06:45:19.395592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:59.734 [2024-11-20 06:45:19.395597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:59.734 [2024-11-20 06:45:19.395601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:59.734 [2024-11-20 06:45:19.407621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.734 [2024-11-20 06:45:19.408175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.734 [2024-11-20 06:45:19.408206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.734 [2024-11-20 06:45:19.408215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.734 [2024-11-20 06:45:19.408379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.734 [2024-11-20 06:45:19.408530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.734 [2024-11-20 06:45:19.408537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.734 [2024-11-20 06:45:19.408542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.734 [2024-11-20 06:45:19.408547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.734 [2024-11-20 06:45:19.420285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.734 [2024-11-20 06:45:19.421589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.734 [2024-11-20 06:45:19.421610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.734 [2024-11-20 06:45:19.421620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.734 [2024-11-20 06:45:19.421782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.734 [2024-11-20 06:45:19.421933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.734 [2024-11-20 06:45:19.421940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.734 [2024-11-20 06:45:19.421945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.734 [2024-11-20 06:45:19.421950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.734 [2024-11-20 06:45:19.433003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.734 [2024-11-20 06:45:19.433488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.734 [2024-11-20 06:45:19.433501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.734 [2024-11-20 06:45:19.433507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.734 [2024-11-20 06:45:19.433656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.734 [2024-11-20 06:45:19.433809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.734 [2024-11-20 06:45:19.433815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.734 [2024-11-20 06:45:19.433820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.734 [2024-11-20 06:45:19.433825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.734 [2024-11-20 06:45:19.445708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.734 [2024-11-20 06:45:19.446162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.734 [2024-11-20 06:45:19.446175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.734 [2024-11-20 06:45:19.446181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.734 [2024-11-20 06:45:19.446329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.734 [2024-11-20 06:45:19.446478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.734 [2024-11-20 06:45:19.446483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.735 [2024-11-20 06:45:19.446488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.735 [2024-11-20 06:45:19.446493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.735 [2024-11-20 06:45:19.458405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.735 [2024-11-20 06:45:19.458870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.735 [2024-11-20 06:45:19.458883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.735 [2024-11-20 06:45:19.458889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.735 [2024-11-20 06:45:19.459037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.735 [2024-11-20 06:45:19.459189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.735 [2024-11-20 06:45:19.459195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.735 [2024-11-20 06:45:19.459200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.735 [2024-11-20 06:45:19.459205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.735 [2024-11-20 06:45:19.471085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.735 [2024-11-20 06:45:19.471569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.735 [2024-11-20 06:45:19.471582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.735 [2024-11-20 06:45:19.471587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.735 [2024-11-20 06:45:19.471736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.735 [2024-11-20 06:45:19.471890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.735 [2024-11-20 06:45:19.471895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.735 [2024-11-20 06:45:19.471901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.735 [2024-11-20 06:45:19.471905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.735 [2024-11-20 06:45:19.483792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.735 [2024-11-20 06:45:19.484237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.735 [2024-11-20 06:45:19.484250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.735 [2024-11-20 06:45:19.484255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.735 [2024-11-20 06:45:19.484403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.735 [2024-11-20 06:45:19.484551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.735 [2024-11-20 06:45:19.484557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.735 [2024-11-20 06:45:19.484562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.735 [2024-11-20 06:45:19.484566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.735 [2024-11-20 06:45:19.496445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.735 [2024-11-20 06:45:19.496894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.735 [2024-11-20 06:45:19.496907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.735 [2024-11-20 06:45:19.496912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.735 [2024-11-20 06:45:19.497061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.735 [2024-11-20 06:45:19.497209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.735 [2024-11-20 06:45:19.497215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.735 [2024-11-20 06:45:19.497222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.735 [2024-11-20 06:45:19.497227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.735 [2024-11-20 06:45:19.509145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.735 [2024-11-20 06:45:19.509529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.735 [2024-11-20 06:45:19.509542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.735 [2024-11-20 06:45:19.509547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.735 [2024-11-20 06:45:19.509696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.735 [2024-11-20 06:45:19.509850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.735 [2024-11-20 06:45:19.509857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.735 [2024-11-20 06:45:19.509862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.735 [2024-11-20 06:45:19.509866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.735 [2024-11-20 06:45:19.521740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.735 [2024-11-20 06:45:19.522076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.735 [2024-11-20 06:45:19.522090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.735 [2024-11-20 06:45:19.522096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.735 [2024-11-20 06:45:19.522245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.735 [2024-11-20 06:45:19.522393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.735 [2024-11-20 06:45:19.522399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.735 [2024-11-20 06:45:19.522404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.735 [2024-11-20 06:45:19.522409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.735 [2024-11-20 06:45:19.534420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.735 [2024-11-20 06:45:19.534879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.735 [2024-11-20 06:45:19.534892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.735 [2024-11-20 06:45:19.534898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.735 [2024-11-20 06:45:19.535046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.735 [2024-11-20 06:45:19.535194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.735 [2024-11-20 06:45:19.535200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.735 [2024-11-20 06:45:19.535205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.735 [2024-11-20 06:45:19.535210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.735 [2024-11-20 06:45:19.547092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.735 [2024-11-20 06:45:19.547535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.735 [2024-11-20 06:45:19.547547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.735 [2024-11-20 06:45:19.547552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.735 [2024-11-20 06:45:19.547700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.735 [2024-11-20 06:45:19.547854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.735 [2024-11-20 06:45:19.547860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.735 [2024-11-20 06:45:19.547866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.735 [2024-11-20 06:45:19.547871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.735 [2024-11-20 06:45:19.559787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.735 [2024-11-20 06:45:19.560273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.735 [2024-11-20 06:45:19.560286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.735 [2024-11-20 06:45:19.560291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.735 [2024-11-20 06:45:19.560440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.735 [2024-11-20 06:45:19.560588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.735 [2024-11-20 06:45:19.560594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.735 [2024-11-20 06:45:19.560599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.735 [2024-11-20 06:45:19.560603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.735 [2024-11-20 06:45:19.572488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.735 [2024-11-20 06:45:19.572944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.735 [2024-11-20 06:45:19.572958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.735 [2024-11-20 06:45:19.572963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.735 [2024-11-20 06:45:19.573111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.735 [2024-11-20 06:45:19.573259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.736 [2024-11-20 06:45:19.573265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.736 [2024-11-20 06:45:19.573269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.736 [2024-11-20 06:45:19.573274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.736 [2024-11-20 06:45:19.585141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.736 [2024-11-20 06:45:19.585582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.736 [2024-11-20 06:45:19.585594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.736 [2024-11-20 06:45:19.585602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.736 [2024-11-20 06:45:19.585756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.736 [2024-11-20 06:45:19.585905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.736 [2024-11-20 06:45:19.585911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.736 [2024-11-20 06:45:19.585916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.736 [2024-11-20 06:45:19.585921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.736 [2024-11-20 06:45:19.597789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.736 [2024-11-20 06:45:19.598255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.736 [2024-11-20 06:45:19.598268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.736 [2024-11-20 06:45:19.598273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.736 [2024-11-20 06:45:19.598421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.736 [2024-11-20 06:45:19.598569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.736 [2024-11-20 06:45:19.598575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.736 [2024-11-20 06:45:19.598580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.736 [2024-11-20 06:45:19.598584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.736 [2024-11-20 06:45:19.610444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.736 [2024-11-20 06:45:19.610894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.736 [2024-11-20 06:45:19.610907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.736 [2024-11-20 06:45:19.610912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.736 [2024-11-20 06:45:19.611060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.736 [2024-11-20 06:45:19.611208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.736 [2024-11-20 06:45:19.611214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.736 [2024-11-20 06:45:19.611219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.736 [2024-11-20 06:45:19.611224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.736 [2024-11-20 06:45:19.623089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.736 [2024-11-20 06:45:19.623573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.736 [2024-11-20 06:45:19.623585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.736 [2024-11-20 06:45:19.623590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.736 [2024-11-20 06:45:19.623738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.736 [2024-11-20 06:45:19.623894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.736 [2024-11-20 06:45:19.623900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.736 [2024-11-20 06:45:19.623905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.736 [2024-11-20 06:45:19.623910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.736 [2024-11-20 06:45:19.635784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.736 [2024-11-20 06:45:19.636231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.736 [2024-11-20 06:45:19.636244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.736 [2024-11-20 06:45:19.636249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.736 [2024-11-20 06:45:19.636397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.736 [2024-11-20 06:45:19.636545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.736 [2024-11-20 06:45:19.636551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.736 [2024-11-20 06:45:19.636555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.736 [2024-11-20 06:45:19.636560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.736 [2024-11-20 06:45:19.648424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.736 [2024-11-20 06:45:19.648883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.736 [2024-11-20 06:45:19.648896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.736 [2024-11-20 06:45:19.648902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.736 [2024-11-20 06:45:19.649050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.736 [2024-11-20 06:45:19.649198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.736 [2024-11-20 06:45:19.649204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.736 [2024-11-20 06:45:19.649209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.736 [2024-11-20 06:45:19.649213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.998 [2024-11-20 06:45:19.661083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.998 [2024-11-20 06:45:19.661558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.998 [2024-11-20 06:45:19.661570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.998 [2024-11-20 06:45:19.661576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.998 [2024-11-20 06:45:19.661724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.998 [2024-11-20 06:45:19.661877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.998 [2024-11-20 06:45:19.661883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.998 [2024-11-20 06:45:19.661891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.998 [2024-11-20 06:45:19.661896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.998 [2024-11-20 06:45:19.673760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.998 [2024-11-20 06:45:19.674210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.998 [2024-11-20 06:45:19.674222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.998 [2024-11-20 06:45:19.674228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.998 [2024-11-20 06:45:19.674376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.998 [2024-11-20 06:45:19.674524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.998 [2024-11-20 06:45:19.674530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.998 [2024-11-20 06:45:19.674535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.998 [2024-11-20 06:45:19.674539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.998 [2024-11-20 06:45:19.686414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.998 [2024-11-20 06:45:19.686810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.998 [2024-11-20 06:45:19.686823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.998 [2024-11-20 06:45:19.686828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.998 [2024-11-20 06:45:19.686976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.998 [2024-11-20 06:45:19.687124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.998 [2024-11-20 06:45:19.687130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.998 [2024-11-20 06:45:19.687135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.998 [2024-11-20 06:45:19.687139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.998 [2024-11-20 06:45:19.698995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.998 [2024-11-20 06:45:19.699445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.998 [2024-11-20 06:45:19.699457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.998 [2024-11-20 06:45:19.699463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.998 [2024-11-20 06:45:19.699611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.998 [2024-11-20 06:45:19.699764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.998 [2024-11-20 06:45:19.699770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.998 [2024-11-20 06:45:19.699775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.998 [2024-11-20 06:45:19.699779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.998 [2024-11-20 06:45:19.711635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.998 [2024-11-20 06:45:19.712181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.998 [2024-11-20 06:45:19.712212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.998 [2024-11-20 06:45:19.712221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.998 [2024-11-20 06:45:19.712385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.998 [2024-11-20 06:45:19.712536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.998 [2024-11-20 06:45:19.712542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.999 [2024-11-20 06:45:19.712547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.999 [2024-11-20 06:45:19.712553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.999 [2024-11-20 06:45:19.724291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.999 [2024-11-20 06:45:19.724797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.999 [2024-11-20 06:45:19.724827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.999 [2024-11-20 06:45:19.724836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.999 [2024-11-20 06:45:19.725002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.999 [2024-11-20 06:45:19.725153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.999 [2024-11-20 06:45:19.725161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.999 [2024-11-20 06:45:19.725166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.999 [2024-11-20 06:45:19.725172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.999 [2024-11-20 06:45:19.736904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.999 [2024-11-20 06:45:19.737478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.999 [2024-11-20 06:45:19.737508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.999 [2024-11-20 06:45:19.737516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.999 [2024-11-20 06:45:19.737681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.999 [2024-11-20 06:45:19.737838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.999 [2024-11-20 06:45:19.737845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.999 [2024-11-20 06:45:19.737851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.999 [2024-11-20 06:45:19.737857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.999 [2024-11-20 06:45:19.749565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.999 [2024-11-20 06:45:19.750103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.999 [2024-11-20 06:45:19.750133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.999 [2024-11-20 06:45:19.750146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.999 [2024-11-20 06:45:19.750312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.999 [2024-11-20 06:45:19.750464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.999 [2024-11-20 06:45:19.750470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.999 [2024-11-20 06:45:19.750475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.999 [2024-11-20 06:45:19.750481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.999 [2024-11-20 06:45:19.762205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.999 [2024-11-20 06:45:19.762810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.999 [2024-11-20 06:45:19.762841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.999 [2024-11-20 06:45:19.762850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.999 [2024-11-20 06:45:19.763016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.999 [2024-11-20 06:45:19.763168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.999 [2024-11-20 06:45:19.763174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.999 [2024-11-20 06:45:19.763180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.999 [2024-11-20 06:45:19.763185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.999 [2024-11-20 06:45:19.774879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.999 [2024-11-20 06:45:19.775378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.999 [2024-11-20 06:45:19.775393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.999 [2024-11-20 06:45:19.775398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.999 [2024-11-20 06:45:19.775548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.999 [2024-11-20 06:45:19.775696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.999 [2024-11-20 06:45:19.775702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.999 [2024-11-20 06:45:19.775707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.999 [2024-11-20 06:45:19.775712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.999 [2024-11-20 06:45:19.787564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.999 [2024-11-20 06:45:19.788009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.999 [2024-11-20 06:45:19.788022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.999 [2024-11-20 06:45:19.788028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.999 [2024-11-20 06:45:19.788176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.999 [2024-11-20 06:45:19.788328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.999 [2024-11-20 06:45:19.788334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.999 [2024-11-20 06:45:19.788339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.999 [2024-11-20 06:45:19.788344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.999 [2024-11-20 06:45:19.800202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.999 [2024-11-20 06:45:19.800657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.999 [2024-11-20 06:45:19.800669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.999 [2024-11-20 06:45:19.800674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.999 [2024-11-20 06:45:19.800827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.999 [2024-11-20 06:45:19.800976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.999 [2024-11-20 06:45:19.800981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.999 [2024-11-20 06:45:19.800986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.999 [2024-11-20 06:45:19.800991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.999 [2024-11-20 06:45:19.812845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.999 [2024-11-20 06:45:19.813190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.999 [2024-11-20 06:45:19.813204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.999 [2024-11-20 06:45:19.813210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.999 [2024-11-20 06:45:19.813358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.999 [2024-11-20 06:45:19.813507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.999 [2024-11-20 06:45:19.813512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.999 [2024-11-20 06:45:19.813517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.999 [2024-11-20 06:45:19.813522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:59.999 [2024-11-20 06:45:19.825520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.999 [2024-11-20 06:45:19.825798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.999 [2024-11-20 06:45:19.825810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:33:59.999 [2024-11-20 06:45:19.825816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:33:59.999 [2024-11-20 06:45:19.825964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:33:59.999 [2024-11-20 06:45:19.826113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:59.999 [2024-11-20 06:45:19.826118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:59.999 [2024-11-20 06:45:19.826126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:59.999 [2024-11-20 06:45:19.826131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:59.999 [2024-11-20 06:45:19.838147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:59.999 [2024-11-20 06:45:19.838596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.000 [2024-11-20 06:45:19.838608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.000 [2024-11-20 06:45:19.838613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.000 [2024-11-20 06:45:19.838765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.000 [2024-11-20 06:45:19.838915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.000 [2024-11-20 06:45:19.838920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.000 [2024-11-20 06:45:19.838926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.000 [2024-11-20 06:45:19.838930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.000 [2024-11-20 06:45:19.850791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.000 [2024-11-20 06:45:19.851271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.000 [2024-11-20 06:45:19.851283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.000 [2024-11-20 06:45:19.851288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.000 [2024-11-20 06:45:19.851436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.000 [2024-11-20 06:45:19.851585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.000 [2024-11-20 06:45:19.851590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.000 [2024-11-20 06:45:19.851595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.000 [2024-11-20 06:45:19.851600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.000 5569.00 IOPS, 21.75 MiB/s [2024-11-20T05:45:19.920Z] [2024-11-20 06:45:19.863442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.000 [2024-11-20 06:45:19.863910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.000 [2024-11-20 06:45:19.863940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.000 [2024-11-20 06:45:19.863949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.000 [2024-11-20 06:45:19.864115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.000 [2024-11-20 06:45:19.864267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.000 [2024-11-20 06:45:19.864273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.000 [2024-11-20 06:45:19.864278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.000 [2024-11-20 06:45:19.864284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.000 [2024-11-20 06:45:19.876023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.000 [2024-11-20 06:45:19.876623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.000 [2024-11-20 06:45:19.876653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.000 [2024-11-20 06:45:19.876661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.000 [2024-11-20 06:45:19.876833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.000 [2024-11-20 06:45:19.876985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.000 [2024-11-20 06:45:19.876991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.000 [2024-11-20 06:45:19.876997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.000 [2024-11-20 06:45:19.877002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.000 [2024-11-20 06:45:19.888720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.000 [2024-11-20 06:45:19.889272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.000 [2024-11-20 06:45:19.889302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.000 [2024-11-20 06:45:19.889311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.000 [2024-11-20 06:45:19.889477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.000 [2024-11-20 06:45:19.889628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.000 [2024-11-20 06:45:19.889634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.000 [2024-11-20 06:45:19.889639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.000 [2024-11-20 06:45:19.889645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.000 [2024-11-20 06:45:19.901370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.000 [2024-11-20 06:45:19.901987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.000 [2024-11-20 06:45:19.902018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.000 [2024-11-20 06:45:19.902026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.000 [2024-11-20 06:45:19.902190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.000 [2024-11-20 06:45:19.902342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.000 [2024-11-20 06:45:19.902348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.000 [2024-11-20 06:45:19.902353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.000 [2024-11-20 06:45:19.902359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.262 [2024-11-20 06:45:19.914082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.262 [2024-11-20 06:45:19.914648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.262 [2024-11-20 06:45:19.914678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.262 [2024-11-20 06:45:19.914691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.262 [2024-11-20 06:45:19.914864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.263 [2024-11-20 06:45:19.915016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.263 [2024-11-20 06:45:19.915022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.263 [2024-11-20 06:45:19.915027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.263 [2024-11-20 06:45:19.915033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.263 [2024-11-20 06:45:19.926759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.263 [2024-11-20 06:45:19.927220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.263 [2024-11-20 06:45:19.927235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.263 [2024-11-20 06:45:19.927240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.263 [2024-11-20 06:45:19.927389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.263 [2024-11-20 06:45:19.927537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.263 [2024-11-20 06:45:19.927543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.263 [2024-11-20 06:45:19.927549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.263 [2024-11-20 06:45:19.927554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.263 [2024-11-20 06:45:19.939420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.263 [2024-11-20 06:45:19.939880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.263 [2024-11-20 06:45:19.939911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.263 [2024-11-20 06:45:19.939919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.263 [2024-11-20 06:45:19.940086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.263 [2024-11-20 06:45:19.940237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.263 [2024-11-20 06:45:19.940243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.263 [2024-11-20 06:45:19.940248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.263 [2024-11-20 06:45:19.940254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.263 [2024-11-20 06:45:19.952121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.263 [2024-11-20 06:45:19.952609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.263 [2024-11-20 06:45:19.952623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.263 [2024-11-20 06:45:19.952629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.263 [2024-11-20 06:45:19.952783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.263 [2024-11-20 06:45:19.952936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.263 [2024-11-20 06:45:19.952942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.263 [2024-11-20 06:45:19.952947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.263 [2024-11-20 06:45:19.952951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.263 [2024-11-20 06:45:19.964822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.263 [2024-11-20 06:45:19.965358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.263 [2024-11-20 06:45:19.965387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.263 [2024-11-20 06:45:19.965396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.263 [2024-11-20 06:45:19.965560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.263 [2024-11-20 06:45:19.965711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.263 [2024-11-20 06:45:19.965718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.263 [2024-11-20 06:45:19.965723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.263 [2024-11-20 06:45:19.965728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.263 [2024-11-20 06:45:19.977464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.263 [2024-11-20 06:45:19.978043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.263 [2024-11-20 06:45:19.978073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.263 [2024-11-20 06:45:19.978082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.263 [2024-11-20 06:45:19.978245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.263 [2024-11-20 06:45:19.978397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.263 [2024-11-20 06:45:19.978403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.263 [2024-11-20 06:45:19.978408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.263 [2024-11-20 06:45:19.978414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.263 [2024-11-20 06:45:19.990126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.263 [2024-11-20 06:45:19.990667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.263 [2024-11-20 06:45:19.990697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.263 [2024-11-20 06:45:19.990705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.263 [2024-11-20 06:45:19.990876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.263 [2024-11-20 06:45:19.991028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.263 [2024-11-20 06:45:19.991034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.263 [2024-11-20 06:45:19.991046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.263 [2024-11-20 06:45:19.991052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.263 [2024-11-20 06:45:20.003358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.263 [2024-11-20 06:45:20.003770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.263 [2024-11-20 06:45:20.003785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.263 [2024-11-20 06:45:20.003791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.263 [2024-11-20 06:45:20.003947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.263 [2024-11-20 06:45:20.004102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.263 [2024-11-20 06:45:20.004109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.263 [2024-11-20 06:45:20.004114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.263 [2024-11-20 06:45:20.004119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.263 [2024-11-20 06:45:20.016070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.263 [2024-11-20 06:45:20.016631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.263 [2024-11-20 06:45:20.016662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.263 [2024-11-20 06:45:20.016672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.263 [2024-11-20 06:45:20.016850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.264 [2024-11-20 06:45:20.017004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.264 [2024-11-20 06:45:20.017010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.264 [2024-11-20 06:45:20.017016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.264 [2024-11-20 06:45:20.017022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.264 [2024-11-20 06:45:20.028739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.264 [2024-11-20 06:45:20.029333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.264 [2024-11-20 06:45:20.029362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.264 [2024-11-20 06:45:20.029371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.264 [2024-11-20 06:45:20.029537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.264 [2024-11-20 06:45:20.029688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.264 [2024-11-20 06:45:20.029695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.264 [2024-11-20 06:45:20.029701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.264 [2024-11-20 06:45:20.029707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.264 [2024-11-20 06:45:20.041428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.264 [2024-11-20 06:45:20.041913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.264 [2024-11-20 06:45:20.041943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.264 [2024-11-20 06:45:20.041952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.264 [2024-11-20 06:45:20.042119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.264 [2024-11-20 06:45:20.042270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.264 [2024-11-20 06:45:20.042276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.264 [2024-11-20 06:45:20.042281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.264 [2024-11-20 06:45:20.042287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.264 [2024-11-20 06:45:20.054057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.264 [2024-11-20 06:45:20.054558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.264 [2024-11-20 06:45:20.054572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.264 [2024-11-20 06:45:20.054578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.264 [2024-11-20 06:45:20.054727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.264 [2024-11-20 06:45:20.054882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.264 [2024-11-20 06:45:20.054888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.264 [2024-11-20 06:45:20.054894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.264 [2024-11-20 06:45:20.054899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.264 [2024-11-20 06:45:20.066764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.264 [2024-11-20 06:45:20.067255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.264 [2024-11-20 06:45:20.067270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.264 [2024-11-20 06:45:20.067276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.264 [2024-11-20 06:45:20.067428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.264 [2024-11-20 06:45:20.067578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.264 [2024-11-20 06:45:20.067584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.264 [2024-11-20 06:45:20.067589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.264 [2024-11-20 06:45:20.067594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.264 [2024-11-20 06:45:20.079473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.264 [2024-11-20 06:45:20.080052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.264 [2024-11-20 06:45:20.080082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.264 [2024-11-20 06:45:20.080095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.264 [2024-11-20 06:45:20.080260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.264 [2024-11-20 06:45:20.080412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.264 [2024-11-20 06:45:20.080418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.264 [2024-11-20 06:45:20.080424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.264 [2024-11-20 06:45:20.080429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.264 [2024-11-20 06:45:20.092141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.264 [2024-11-20 06:45:20.092642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.264 [2024-11-20 06:45:20.092657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.264 [2024-11-20 06:45:20.092663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.264 [2024-11-20 06:45:20.092816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.264 [2024-11-20 06:45:20.092966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.264 [2024-11-20 06:45:20.092972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.264 [2024-11-20 06:45:20.092978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.264 [2024-11-20 06:45:20.092983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.264 [2024-11-20 06:45:20.104751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.264 [2024-11-20 06:45:20.105256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.264 [2024-11-20 06:45:20.105268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.264 [2024-11-20 06:45:20.105274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.264 [2024-11-20 06:45:20.105422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.264 [2024-11-20 06:45:20.105571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.264 [2024-11-20 06:45:20.105577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.264 [2024-11-20 06:45:20.105582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.264 [2024-11-20 06:45:20.105587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.264 [2024-11-20 06:45:20.117424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.264 [2024-11-20 06:45:20.118003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.264 [2024-11-20 06:45:20.118033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.264 [2024-11-20 06:45:20.118042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.264 [2024-11-20 06:45:20.118207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.264 [2024-11-20 06:45:20.118363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.264 [2024-11-20 06:45:20.118369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.264 [2024-11-20 06:45:20.118375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.264 [2024-11-20 06:45:20.118380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.264 [2024-11-20 06:45:20.130117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.265 [2024-11-20 06:45:20.130700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.265 [2024-11-20 06:45:20.130729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.265 [2024-11-20 06:45:20.130738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.265 [2024-11-20 06:45:20.130911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.265 [2024-11-20 06:45:20.131064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.265 [2024-11-20 06:45:20.131070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.265 [2024-11-20 06:45:20.131076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.265 [2024-11-20 06:45:20.131081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.265 [2024-11-20 06:45:20.142821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.265 [2024-11-20 06:45:20.143441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.265 [2024-11-20 06:45:20.143472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.265 [2024-11-20 06:45:20.143480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.265 [2024-11-20 06:45:20.143647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.265 [2024-11-20 06:45:20.143806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.265 [2024-11-20 06:45:20.143814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.265 [2024-11-20 06:45:20.143819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.265 [2024-11-20 06:45:20.143825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.265 [2024-11-20 06:45:20.155411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.265 [2024-11-20 06:45:20.156048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.265 [2024-11-20 06:45:20.156078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.265 [2024-11-20 06:45:20.156087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.265 [2024-11-20 06:45:20.156251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.265 [2024-11-20 06:45:20.156403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.265 [2024-11-20 06:45:20.156409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.265 [2024-11-20 06:45:20.156419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.265 [2024-11-20 06:45:20.156426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.265 [2024-11-20 06:45:20.168021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.265 [2024-11-20 06:45:20.168565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.265 [2024-11-20 06:45:20.168596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.265 [2024-11-20 06:45:20.168604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.265 [2024-11-20 06:45:20.168779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.265 [2024-11-20 06:45:20.168932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.265 [2024-11-20 06:45:20.168939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.265 [2024-11-20 06:45:20.168945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.265 [2024-11-20 06:45:20.168951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.527 [2024-11-20 06:45:20.180686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.527 [2024-11-20 06:45:20.181198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.527 [2024-11-20 06:45:20.181213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.527 [2024-11-20 06:45:20.181219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.527 [2024-11-20 06:45:20.181369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.527 [2024-11-20 06:45:20.181517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.527 [2024-11-20 06:45:20.181523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.527 [2024-11-20 06:45:20.181528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.528 [2024-11-20 06:45:20.181534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.528 [2024-11-20 06:45:20.193390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.528 [2024-11-20 06:45:20.193961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.528 [2024-11-20 06:45:20.193991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.528 [2024-11-20 06:45:20.194000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.528 [2024-11-20 06:45:20.194168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.528 [2024-11-20 06:45:20.194319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.528 [2024-11-20 06:45:20.194326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.528 [2024-11-20 06:45:20.194331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.528 [2024-11-20 06:45:20.194337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.528 [2024-11-20 06:45:20.206070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.528 [2024-11-20 06:45:20.206553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.528 [2024-11-20 06:45:20.206583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.528 [2024-11-20 06:45:20.206592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.528 [2024-11-20 06:45:20.206763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.528 [2024-11-20 06:45:20.206915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.528 [2024-11-20 06:45:20.206922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.528 [2024-11-20 06:45:20.206927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.528 [2024-11-20 06:45:20.206933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.528 [2024-11-20 06:45:20.218673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.528 [2024-11-20 06:45:20.219185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.528 [2024-11-20 06:45:20.219200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.528 [2024-11-20 06:45:20.219206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.528 [2024-11-20 06:45:20.219355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.528 [2024-11-20 06:45:20.219503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.528 [2024-11-20 06:45:20.219509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.528 [2024-11-20 06:45:20.219514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.528 [2024-11-20 06:45:20.219519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.528 [2024-11-20 06:45:20.231283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.528 [2024-11-20 06:45:20.231910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.528 [2024-11-20 06:45:20.231940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.528 [2024-11-20 06:45:20.231949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.528 [2024-11-20 06:45:20.232113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.528 [2024-11-20 06:45:20.232265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.528 [2024-11-20 06:45:20.232271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.528 [2024-11-20 06:45:20.232276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.528 [2024-11-20 06:45:20.232282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.528 [2024-11-20 06:45:20.243883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.528 [2024-11-20 06:45:20.244453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.528 [2024-11-20 06:45:20.244483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.528 [2024-11-20 06:45:20.244495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.528 [2024-11-20 06:45:20.244659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.528 [2024-11-20 06:45:20.244816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.528 [2024-11-20 06:45:20.244823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.528 [2024-11-20 06:45:20.244828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.528 [2024-11-20 06:45:20.244834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.528 [2024-11-20 06:45:20.256561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.528 [2024-11-20 06:45:20.257123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.528 [2024-11-20 06:45:20.257153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.528 [2024-11-20 06:45:20.257162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.528 [2024-11-20 06:45:20.257325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.528 [2024-11-20 06:45:20.257477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.528 [2024-11-20 06:45:20.257483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.528 [2024-11-20 06:45:20.257488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.528 [2024-11-20 06:45:20.257494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.528 [2024-11-20 06:45:20.269222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.528 [2024-11-20 06:45:20.269793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.528 [2024-11-20 06:45:20.269823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.528 [2024-11-20 06:45:20.269831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.528 [2024-11-20 06:45:20.270001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.528 [2024-11-20 06:45:20.270154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.528 [2024-11-20 06:45:20.270160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.528 [2024-11-20 06:45:20.270166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.528 [2024-11-20 06:45:20.270172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.528 [2024-11-20 06:45:20.281915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.528 [2024-11-20 06:45:20.282494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.528 [2024-11-20 06:45:20.282524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.528 [2024-11-20 06:45:20.282533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.528 [2024-11-20 06:45:20.282699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.528 [2024-11-20 06:45:20.282861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.528 [2024-11-20 06:45:20.282868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.528 [2024-11-20 06:45:20.282874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.528 [2024-11-20 06:45:20.282879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.528 [2024-11-20 06:45:20.294599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.529 [2024-11-20 06:45:20.295197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.529 [2024-11-20 06:45:20.295227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.529 [2024-11-20 06:45:20.295236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.529 [2024-11-20 06:45:20.295400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.529 [2024-11-20 06:45:20.295551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.529 [2024-11-20 06:45:20.295557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.529 [2024-11-20 06:45:20.295563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.529 [2024-11-20 06:45:20.295568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.529 [2024-11-20 06:45:20.307295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.529 [2024-11-20 06:45:20.307755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.529 [2024-11-20 06:45:20.307769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.529 [2024-11-20 06:45:20.307775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.529 [2024-11-20 06:45:20.307924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.529 [2024-11-20 06:45:20.308072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.529 [2024-11-20 06:45:20.308078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.529 [2024-11-20 06:45:20.308083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.529 [2024-11-20 06:45:20.308088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.529 [2024-11-20 06:45:20.319951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.529 [2024-11-20 06:45:20.320437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.529 [2024-11-20 06:45:20.320450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.529 [2024-11-20 06:45:20.320456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.529 [2024-11-20 06:45:20.320604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.529 [2024-11-20 06:45:20.320759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.529 [2024-11-20 06:45:20.320765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.529 [2024-11-20 06:45:20.320774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.529 [2024-11-20 06:45:20.320779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.529 [2024-11-20 06:45:20.332647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.529 [2024-11-20 06:45:20.333123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.529 [2024-11-20 06:45:20.333136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.529 [2024-11-20 06:45:20.333142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.529 [2024-11-20 06:45:20.333290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.529 [2024-11-20 06:45:20.333438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.529 [2024-11-20 06:45:20.333444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.529 [2024-11-20 06:45:20.333449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.529 [2024-11-20 06:45:20.333454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.529 [2024-11-20 06:45:20.345316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.529 [2024-11-20 06:45:20.345857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.529 [2024-11-20 06:45:20.345888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.529 [2024-11-20 06:45:20.345896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.529 [2024-11-20 06:45:20.346063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.529 [2024-11-20 06:45:20.346219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.529 [2024-11-20 06:45:20.346226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.529 [2024-11-20 06:45:20.346231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.529 [2024-11-20 06:45:20.346237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.529 [2024-11-20 06:45:20.357958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.529 [2024-11-20 06:45:20.358538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.529 [2024-11-20 06:45:20.358568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.529 [2024-11-20 06:45:20.358576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.529 [2024-11-20 06:45:20.358740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.529 [2024-11-20 06:45:20.358904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.529 [2024-11-20 06:45:20.358911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.529 [2024-11-20 06:45:20.358917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.529 [2024-11-20 06:45:20.358922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.529 [2024-11-20 06:45:20.370640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.529 [2024-11-20 06:45:20.371292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.529 [2024-11-20 06:45:20.371322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.529 [2024-11-20 06:45:20.371331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.529 [2024-11-20 06:45:20.371494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.529 [2024-11-20 06:45:20.371646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.529 [2024-11-20 06:45:20.371652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.529 [2024-11-20 06:45:20.371657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.529 [2024-11-20 06:45:20.371663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.529 [2024-11-20 06:45:20.383262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.529 [2024-11-20 06:45:20.383854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.529 [2024-11-20 06:45:20.383885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.529 [2024-11-20 06:45:20.383893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.529 [2024-11-20 06:45:20.384059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.529 [2024-11-20 06:45:20.384211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.529 [2024-11-20 06:45:20.384217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.529 [2024-11-20 06:45:20.384222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.529 [2024-11-20 06:45:20.384227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.529 [2024-11-20 06:45:20.395954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.529 [2024-11-20 06:45:20.396549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.529 [2024-11-20 06:45:20.396579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.529 [2024-11-20 06:45:20.396587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.529 [2024-11-20 06:45:20.396758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.529 [2024-11-20 06:45:20.396911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.530 [2024-11-20 06:45:20.396916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.530 [2024-11-20 06:45:20.396922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.530 [2024-11-20 06:45:20.396928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.530 [2024-11-20 06:45:20.408540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.530 [2024-11-20 06:45:20.408998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.530 [2024-11-20 06:45:20.409028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.530 [2024-11-20 06:45:20.409040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.530 [2024-11-20 06:45:20.409207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.530 [2024-11-20 06:45:20.409358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.530 [2024-11-20 06:45:20.409364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.530 [2024-11-20 06:45:20.409369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.530 [2024-11-20 06:45:20.409375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.530 [2024-11-20 06:45:20.421244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.530 [2024-11-20 06:45:20.421755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.530 [2024-11-20 06:45:20.421785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.530 [2024-11-20 06:45:20.421793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.530 [2024-11-20 06:45:20.421957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.530 [2024-11-20 06:45:20.422108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.530 [2024-11-20 06:45:20.422115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.530 [2024-11-20 06:45:20.422120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.530 [2024-11-20 06:45:20.422125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.530 [2024-11-20 06:45:20.433857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.530 [2024-11-20 06:45:20.434426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.530 [2024-11-20 06:45:20.434456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.530 [2024-11-20 06:45:20.434465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.530 [2024-11-20 06:45:20.434629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.530 [2024-11-20 06:45:20.434787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.530 [2024-11-20 06:45:20.434794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.530 [2024-11-20 06:45:20.434800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.530 [2024-11-20 06:45:20.434805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.791 [2024-11-20 06:45:20.446543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.791 [2024-11-20 06:45:20.447022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.791 [2024-11-20 06:45:20.447052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.791 [2024-11-20 06:45:20.447061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.791 [2024-11-20 06:45:20.447225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.791 [2024-11-20 06:45:20.447380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.791 [2024-11-20 06:45:20.447387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.791 [2024-11-20 06:45:20.447392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.791 [2024-11-20 06:45:20.447398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.791 [2024-11-20 06:45:20.459123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:00.791 [2024-11-20 06:45:20.459677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:00.791 [2024-11-20 06:45:20.459706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:00.791 [2024-11-20 06:45:20.459715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:00.791 [2024-11-20 06:45:20.459886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:00.791 [2024-11-20 06:45:20.460042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:00.791 [2024-11-20 06:45:20.460050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:00.792 [2024-11-20 06:45:20.460055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:00.792 [2024-11-20 06:45:20.460061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:00.792 [2024-11-20 06:45:20.471784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.792 [2024-11-20 06:45:20.472244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.792 [2024-11-20 06:45:20.472275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.792 [2024-11-20 06:45:20.472283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.792 [2024-11-20 06:45:20.472447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.792 [2024-11-20 06:45:20.472598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.792 [2024-11-20 06:45:20.472604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.792 [2024-11-20 06:45:20.472610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.792 [2024-11-20 06:45:20.472615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2898062 Killed "${NVMF_APP[@]}" "$@" 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:00.792 [2024-11-20 06:45:20.484497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.792 [2024-11-20 06:45:20.485064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.792 [2024-11-20 06:45:20.485094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.792 [2024-11-20 06:45:20.485106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.792 [2024-11-20 06:45:20.485270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.792 [2024-11-20 06:45:20.485422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.792 [2024-11-20 06:45:20.485428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.792 [2024-11-20 06:45:20.485433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.792 [2024-11-20 06:45:20.485439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2899797 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2899797 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2899797 ']' 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:00.792 06:45:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:00.792 [2024-11-20 06:45:20.497184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.792 [2024-11-20 06:45:20.497792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.792 [2024-11-20 06:45:20.497823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.792 [2024-11-20 06:45:20.497833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.792 [2024-11-20 06:45:20.497999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.792 [2024-11-20 06:45:20.498151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.792 [2024-11-20 06:45:20.498158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.792 [2024-11-20 06:45:20.498164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.792 [2024-11-20 06:45:20.498170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
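Interleaved with the reconnect noise, bdevperf.sh has killed the old target (PID 2898062) and tgt_init/nvmfappstart is bringing up a new nvmf_tgt (nvmfpid=2899797), then blocks in waitforlisten until the new process answers on /var/tmp/spdk.sock. The real helper lives in common/autotest_common.sh; the following is only an assumed, simplified sketch of the idea, not its actual code:
# Assumed approximation of waitforlisten: wait for the target's RPC UNIX socket.
waitforlisten_sketch() {
  local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
  for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || return 1   # target exited prematurely
    [ -S "$rpc_sock" ] && return 0           # socket exists: target is listening
    sleep 0.1
  done
  return 1                                   # timed out waiting for the socket
}
waitforlisten_sketch 2899797 /var/tmp/spdk.sock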
00:34:00.792 [2024-11-20 06:45:20.509760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.792 [2024-11-20 06:45:20.510223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.792 [2024-11-20 06:45:20.510251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.792 [2024-11-20 06:45:20.510260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.792 [2024-11-20 06:45:20.510424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.792 [2024-11-20 06:45:20.510576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.792 [2024-11-20 06:45:20.510586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.792 [2024-11-20 06:45:20.510592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.792 [2024-11-20 06:45:20.510597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.792 [2024-11-20 06:45:20.522336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.792 [2024-11-20 06:45:20.522780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.792 [2024-11-20 06:45:20.522795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.792 [2024-11-20 06:45:20.522801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.792 [2024-11-20 06:45:20.522950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.792 [2024-11-20 06:45:20.523099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.792 [2024-11-20 06:45:20.523104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.792 [2024-11-20 06:45:20.523110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.792 [2024-11-20 06:45:20.523115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.792 [2024-11-20 06:45:20.534986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.792 [2024-11-20 06:45:20.535481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.792 [2024-11-20 06:45:20.535494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.792 [2024-11-20 06:45:20.535499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.792 [2024-11-20 06:45:20.535648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.792 [2024-11-20 06:45:20.535807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.792 [2024-11-20 06:45:20.535814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.792 [2024-11-20 06:45:20.535820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.792 [2024-11-20 06:45:20.535825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.792 [2024-11-20 06:45:20.547692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.792 [2024-11-20 06:45:20.548221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.792 [2024-11-20 06:45:20.548234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.792 [2024-11-20 06:45:20.548240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.792 [2024-11-20 06:45:20.548389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.792 [2024-11-20 06:45:20.548538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.792 [2024-11-20 06:45:20.548543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.792 [2024-11-20 06:45:20.548549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.792 [2024-11-20 06:45:20.548553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.792 [2024-11-20 06:45:20.549411] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:34:00.792 [2024-11-20 06:45:20.549456] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:00.792 [2024-11-20 06:45:20.560281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.792 [2024-11-20 06:45:20.560856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.792 [2024-11-20 06:45:20.560887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.792 [2024-11-20 06:45:20.560896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.792 [2024-11-20 06:45:20.561063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.792 [2024-11-20 06:45:20.561215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.792 [2024-11-20 06:45:20.561221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.792 [2024-11-20 06:45:20.561226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.792 [2024-11-20 06:45:20.561232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.792 [2024-11-20 06:45:20.572966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.792 [2024-11-20 06:45:20.573547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.792 [2024-11-20 06:45:20.573576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.792 [2024-11-20 06:45:20.573585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.792 [2024-11-20 06:45:20.573758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.792 [2024-11-20 06:45:20.573910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.792 [2024-11-20 06:45:20.573916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.792 [2024-11-20 06:45:20.573923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.792 [2024-11-20 06:45:20.573928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.793 [2024-11-20 06:45:20.585615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.793 [2024-11-20 06:45:20.586097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.793 [2024-11-20 06:45:20.586113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.793 [2024-11-20 06:45:20.586119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.793 [2024-11-20 06:45:20.586268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.793 [2024-11-20 06:45:20.586417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.793 [2024-11-20 06:45:20.586422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.793 [2024-11-20 06:45:20.586428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.793 [2024-11-20 06:45:20.586439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.793 [2024-11-20 06:45:20.598306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.793 [2024-11-20 06:45:20.598765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.793 [2024-11-20 06:45:20.598779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.793 [2024-11-20 06:45:20.598785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.793 [2024-11-20 06:45:20.598935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.793 [2024-11-20 06:45:20.599083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.793 [2024-11-20 06:45:20.599088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.793 [2024-11-20 06:45:20.599094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.793 [2024-11-20 06:45:20.599099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.793 [2024-11-20 06:45:20.610959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.793 [2024-11-20 06:45:20.611556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.793 [2024-11-20 06:45:20.611586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.793 [2024-11-20 06:45:20.611595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.793 [2024-11-20 06:45:20.611766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.793 [2024-11-20 06:45:20.611919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.793 [2024-11-20 06:45:20.611925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.793 [2024-11-20 06:45:20.611931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.793 [2024-11-20 06:45:20.611936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.793 [2024-11-20 06:45:20.623666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.793 [2024-11-20 06:45:20.624228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.793 [2024-11-20 06:45:20.624257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.793 [2024-11-20 06:45:20.624266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.793 [2024-11-20 06:45:20.624430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.793 [2024-11-20 06:45:20.624582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.793 [2024-11-20 06:45:20.624588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.793 [2024-11-20 06:45:20.624594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.793 [2024-11-20 06:45:20.624600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.793 [2024-11-20 06:45:20.636332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.793 [2024-11-20 06:45:20.636856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.793 [2024-11-20 06:45:20.636886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.793 [2024-11-20 06:45:20.636895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.793 [2024-11-20 06:45:20.637061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.793 [2024-11-20 06:45:20.637213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.793 [2024-11-20 06:45:20.637219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.793 [2024-11-20 06:45:20.637225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.793 [2024-11-20 06:45:20.637231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.793 [2024-11-20 06:45:20.642952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:00.793 [2024-11-20 06:45:20.648974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.793 [2024-11-20 06:45:20.649570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.793 [2024-11-20 06:45:20.649601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.793 [2024-11-20 06:45:20.649610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.793 [2024-11-20 06:45:20.649781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.793 [2024-11-20 06:45:20.649934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.793 [2024-11-20 06:45:20.649940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.793 [2024-11-20 06:45:20.649945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.793 [2024-11-20 06:45:20.649952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:00.793 [2024-11-20 06:45:20.661689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.793 [2024-11-20 06:45:20.662172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.793 [2024-11-20 06:45:20.662188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.793 [2024-11-20 06:45:20.662194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.793 [2024-11-20 06:45:20.662343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.793 [2024-11-20 06:45:20.662492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.793 [2024-11-20 06:45:20.662498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.793 [2024-11-20 06:45:20.662503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.793 [2024-11-20 06:45:20.662508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.793 [2024-11-20 06:45:20.671897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:00.793 [2024-11-20 06:45:20.671918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:00.793 [2024-11-20 06:45:20.671925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:00.793 [2024-11-20 06:45:20.671933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:00.793 [2024-11-20 06:45:20.671939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
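The new target was started with -e 0xFFFF, so all tracepoint groups are enabled, and the app_setup_trace notices above spell out how to inspect them. Following those instructions literally (the command, instance id, and shared-memory path are the ones the target printed):
# Live snapshot of nvmf events, exactly as the notice suggests:
spdk_trace -s nvmf -i 0
# ...or keep the shared-memory trace file for offline analysis/debug:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0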
00:34:00.793 [2024-11-20 06:45:20.673151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:00.793 [2024-11-20 06:45:20.673279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:00.793 [2024-11-20 06:45:20.673280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:00.793 [2024-11-20 06:45:20.674384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.793 [2024-11-20 06:45:20.675042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.793 [2024-11-20 06:45:20.675073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.793 [2024-11-20 06:45:20.675082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.793 [2024-11-20 06:45:20.675247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.793 [2024-11-20 06:45:20.675399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.793 [2024-11-20 06:45:20.675405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.793 [2024-11-20 06:45:20.675411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.793 [2024-11-20 06:45:20.675417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:00.793 [2024-11-20 06:45:20.687021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.793 [2024-11-20 06:45:20.687652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.793 [2024-11-20 06:45:20.687682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.793 [2024-11-20 06:45:20.687691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.793 [2024-11-20 06:45:20.687867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.793 [2024-11-20 06:45:20.688019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.793 [2024-11-20 06:45:20.688026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.793 [2024-11-20 06:45:20.688032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.793 [2024-11-20 06:45:20.688038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
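The three reactor notices above match the -m 0xE core mask handed to nvmfappstart: 0xE is binary 1110, so bit 0 is clear and bits 1-3 are set, which is why reactors land on cores 1, 2 and 3 of the "Total cores available: 3" reported earlier. A one-off check of that arithmetic:
# 0xE = 0b1110 -> cores 1, 2, 3 (bit 0 left free), as the reactor lines show.
mask=0xE
for bit in 0 1 2 3; do
  (( (mask >> bit) & 1 )) && echo "reactor expected on core $bit"
done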
00:34:00.794 [2024-11-20 06:45:20.699632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:00.794 [2024-11-20 06:45:20.700315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:00.794 [2024-11-20 06:45:20.700346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:00.794 [2024-11-20 06:45:20.700355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:00.794 [2024-11-20 06:45:20.700520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:00.794 [2024-11-20 06:45:20.700672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:00.794 [2024-11-20 06:45:20.700678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:00.794 [2024-11-20 06:45:20.700684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:00.794 [2024-11-20 06:45:20.700695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.055 [2024-11-20 06:45:20.712291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.055 [2024-11-20 06:45:20.712702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.055 [2024-11-20 06:45:20.712732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.055 [2024-11-20 06:45:20.712741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.055 [2024-11-20 06:45:20.712913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.055 [2024-11-20 06:45:20.713065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.055 [2024-11-20 06:45:20.713071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.055 [2024-11-20 06:45:20.713077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.055 [2024-11-20 06:45:20.713083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.055 [2024-11-20 06:45:20.724953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.055 [2024-11-20 06:45:20.725441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.056 [2024-11-20 06:45:20.725471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.056 [2024-11-20 06:45:20.725480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.056 [2024-11-20 06:45:20.725645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.056 [2024-11-20 06:45:20.725802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.056 [2024-11-20 06:45:20.725809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.056 [2024-11-20 06:45:20.725814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.056 [2024-11-20 06:45:20.725820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.056 [2024-11-20 06:45:20.737589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.056 [2024-11-20 06:45:20.738189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.056 [2024-11-20 06:45:20.738220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.056 [2024-11-20 06:45:20.738229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.056 [2024-11-20 06:45:20.738393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.056 [2024-11-20 06:45:20.738544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.056 [2024-11-20 06:45:20.738551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.056 [2024-11-20 06:45:20.738556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.056 [2024-11-20 06:45:20.738562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.056 [2024-11-20 06:45:20.750312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.056 [2024-11-20 06:45:20.750847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.056 [2024-11-20 06:45:20.750877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.056 [2024-11-20 06:45:20.750886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.056 [2024-11-20 06:45:20.751053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.056 [2024-11-20 06:45:20.751204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.056 [2024-11-20 06:45:20.751210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.056 [2024-11-20 06:45:20.751216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.056 [2024-11-20 06:45:20.751222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.056 [2024-11-20 06:45:20.762953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.056 [2024-11-20 06:45:20.763398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.056 [2024-11-20 06:45:20.763428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.056 [2024-11-20 06:45:20.763437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.056 [2024-11-20 06:45:20.763602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.056 [2024-11-20 06:45:20.763759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.056 [2024-11-20 06:45:20.763766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.056 [2024-11-20 06:45:20.763771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.056 [2024-11-20 06:45:20.763777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.056 [2024-11-20 06:45:20.775645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.056 [2024-11-20 06:45:20.776118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.056 [2024-11-20 06:45:20.776133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.056 [2024-11-20 06:45:20.776139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.056 [2024-11-20 06:45:20.776288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.056 [2024-11-20 06:45:20.776448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.056 [2024-11-20 06:45:20.776455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.056 [2024-11-20 06:45:20.776460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.056 [2024-11-20 06:45:20.776466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.056 [2024-11-20 06:45:20.788334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.056 [2024-11-20 06:45:20.788983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.056 [2024-11-20 06:45:20.789013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.056 [2024-11-20 06:45:20.789022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.056 [2024-11-20 06:45:20.789190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.056 [2024-11-20 06:45:20.789343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.056 [2024-11-20 06:45:20.789349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.056 [2024-11-20 06:45:20.789355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.056 [2024-11-20 06:45:20.789361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.056 [2024-11-20 06:45:20.801008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.056 [2024-11-20 06:45:20.801581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.056 [2024-11-20 06:45:20.801611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.056 [2024-11-20 06:45:20.801620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.056 [2024-11-20 06:45:20.801791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.056 [2024-11-20 06:45:20.801944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.056 [2024-11-20 06:45:20.801950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.056 [2024-11-20 06:45:20.801955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.056 [2024-11-20 06:45:20.801961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.056 [2024-11-20 06:45:20.813687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.056 [2024-11-20 06:45:20.814048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.056 [2024-11-20 06:45:20.814066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.056 [2024-11-20 06:45:20.814075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.056 [2024-11-20 06:45:20.814225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.056 [2024-11-20 06:45:20.814373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.056 [2024-11-20 06:45:20.814379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.056 [2024-11-20 06:45:20.814384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.056 [2024-11-20 06:45:20.814389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.056 [2024-11-20 06:45:20.826397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.056 [2024-11-20 06:45:20.827036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.056 [2024-11-20 06:45:20.827067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.056 [2024-11-20 06:45:20.827076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.056 [2024-11-20 06:45:20.827240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.056 [2024-11-20 06:45:20.827392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.056 [2024-11-20 06:45:20.827401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.056 [2024-11-20 06:45:20.827407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.056 [2024-11-20 06:45:20.827413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.056 [2024-11-20 06:45:20.839013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.056 [2024-11-20 06:45:20.839521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.056 [2024-11-20 06:45:20.839535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.056 [2024-11-20 06:45:20.839541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.056 [2024-11-20 06:45:20.839690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.057 [2024-11-20 06:45:20.839851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.057 [2024-11-20 06:45:20.839857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.057 [2024-11-20 06:45:20.839862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.057 [2024-11-20 06:45:20.839867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.057 [2024-11-20 06:45:20.851729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.057 [2024-11-20 06:45:20.852178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.057 [2024-11-20 06:45:20.852207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.057 [2024-11-20 06:45:20.852215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.057 [2024-11-20 06:45:20.852382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.057 [2024-11-20 06:45:20.852534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.057 [2024-11-20 06:45:20.852540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.057 [2024-11-20 06:45:20.852545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.057 [2024-11-20 06:45:20.852551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.057 4640.83 IOPS, 18.13 MiB/s [2024-11-20T05:45:20.977Z] [2024-11-20 06:45:20.865672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.057 [2024-11-20 06:45:20.866290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.057 [2024-11-20 06:45:20.866320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.057 [2024-11-20 06:45:20.866329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.057 [2024-11-20 06:45:20.866493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.057 [2024-11-20 06:45:20.866645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.057 [2024-11-20 06:45:20.866652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.057 [2024-11-20 06:45:20.866657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.057 [2024-11-20 06:45:20.866666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
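The bdevperf sample embedded above, 4640.83 IOPS at 18.13 MiB/s, is consistent with a 4 KiB I/O size: 4640.83 x 4096 / 1048576 = 18.13 (the 4 KiB figure is an inference from the numbers, not stated in the log). A quick check of that conversion:
# 4640.83 IOPS * 4096 B per I/O / 1048576 B per MiB = 18.13 MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 4640.83 * 4096 / 1048576 }'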
00:34:01.057 [2024-11-20 06:45:20.878262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.057 [2024-11-20 06:45:20.878725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.057 [2024-11-20 06:45:20.878740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.057 [2024-11-20 06:45:20.878750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.057 [2024-11-20 06:45:20.878900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.057 [2024-11-20 06:45:20.879049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.057 [2024-11-20 06:45:20.879055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.057 [2024-11-20 06:45:20.879060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.057 [2024-11-20 06:45:20.879065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.057 [2024-11-20 06:45:20.890930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.057 [2024-11-20 06:45:20.891388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.057 [2024-11-20 06:45:20.891401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.057 [2024-11-20 06:45:20.891406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.057 [2024-11-20 06:45:20.891554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.057 [2024-11-20 06:45:20.891702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.057 [2024-11-20 06:45:20.891707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.057 [2024-11-20 06:45:20.891712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.057 [2024-11-20 06:45:20.891717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.057 [2024-11-20 06:45:20.903581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.057 [2024-11-20 06:45:20.904127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.057 [2024-11-20 06:45:20.904140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.057 [2024-11-20 06:45:20.904145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.057 [2024-11-20 06:45:20.904293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.057 [2024-11-20 06:45:20.904446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.057 [2024-11-20 06:45:20.904453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.057 [2024-11-20 06:45:20.904458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.057 [2024-11-20 06:45:20.904462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.057 [2024-11-20 06:45:20.916183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.057 [2024-11-20 06:45:20.916649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.057 [2024-11-20 06:45:20.916661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.057 [2024-11-20 06:45:20.916666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.057 [2024-11-20 06:45:20.916818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.057 [2024-11-20 06:45:20.916967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.057 [2024-11-20 06:45:20.916972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.057 [2024-11-20 06:45:20.916977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.057 [2024-11-20 06:45:20.916982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.057 [2024-11-20 06:45:20.928834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.057 [2024-11-20 06:45:20.929188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.057 [2024-11-20 06:45:20.929201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.057 [2024-11-20 06:45:20.929206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.057 [2024-11-20 06:45:20.929354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.057 [2024-11-20 06:45:20.929502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.057 [2024-11-20 06:45:20.929507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.057 [2024-11-20 06:45:20.929512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.057 [2024-11-20 06:45:20.929517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:01.057 [2024-11-20 06:45:20.941529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:01.057 [2024-11-20 06:45:20.941986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.057 [2024-11-20 06:45:20.942016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420 00:34:01.057 [2024-11-20 06:45:20.942025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set 00:34:01.057 [2024-11-20 06:45:20.942190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor 00:34:01.057 [2024-11-20 06:45:20.942341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:01.057 [2024-11-20 06:45:20.942347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:01.057 [2024-11-20 06:45:20.942353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:01.057 [2024-11-20 06:45:20.942359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:01.057 [... roughly thirty further identical reconnect cycles elided: the connect() errno = 111 / "Resetting controller failed." sequence above repeats for tqpair=0x1d24280 about every 12 ms from 2024-11-20 06:45:20.954 through 06:45:21.321 ...]
00:34:01.586 [2024-11-20 06:45:21.333698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:01.586 [2024-11-20 06:45:21.334262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.586 [2024-11-20 06:45:21.334292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:01.586 [2024-11-20 06:45:21.334301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:01.586 [2024-11-20 06:45:21.334466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:01.586 [2024-11-20 06:45:21.334617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:01.586 [2024-11-20 06:45:21.334623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:01.586 [2024-11-20 06:45:21.334629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:01.586 [2024-11-20 06:45:21.334634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:01.586 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:34:01.586 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0
00:34:01.586 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:01.586 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:01.586 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:01.586 [2024-11-20 06:45:21.346358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:01.586 [2024-11-20 06:45:21.346713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.586 [2024-11-20 06:45:21.346728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:01.586 [2024-11-20 06:45:21.346733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:01.586 [2024-11-20 06:45:21.346886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:01.586 [2024-11-20 06:45:21.347035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:01.586 [2024-11-20 06:45:21.347041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:01.586 [2024-11-20 06:45:21.347046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:01.586 [2024-11-20 06:45:21.347051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
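The `(( i == 0 ))` / `return 0` / `timing_exit start_nvmf_tgt` lines are the harness concluding that the freshly launched nvmf target process is up and answering; the reconnect errors keep firing only because no subsystem listener has been configured on it yet. A readiness wait of roughly this shape (an illustrative sketch, not the harness's exact implementation) polls the target's RPC socket:

    # Sketch only: spdk_get_version is a standard SPDK RPC; the rpc.py path is assumed.
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.1   # target process still starting
    done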
00:34:01.586 [2024-11-20 06:45:21.359040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:01.586 [2024-11-20 06:45:21.359590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.586 [2024-11-20 06:45:21.359620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:01.586 [2024-11-20 06:45:21.359629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:01.586 [2024-11-20 06:45:21.359799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:01.586 [2024-11-20 06:45:21.359956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:01.586 [2024-11-20 06:45:21.359963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:01.586 [2024-11-20 06:45:21.359968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:01.586 [2024-11-20 06:45:21.359974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:01.586 [2024-11-20 06:45:21.371695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:01.586 [2024-11-20 06:45:21.372298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.586 [2024-11-20 06:45:21.372329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:01.586 [2024-11-20 06:45:21.372338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:01.586 [2024-11-20 06:45:21.372502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:01.586 [2024-11-20 06:45:21.372653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:01.586 [2024-11-20 06:45:21.372660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:01.586 [2024-11-20 06:45:21.372665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:01.586 [2024-11-20 06:45:21.372671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:01.586 [2024-11-20 06:45:21.384281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:01.586 [2024-11-20 06:45:21.384603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.586 [2024-11-20 06:45:21.384618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:01.586 [2024-11-20 06:45:21.384624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:01.586 [2024-11-20 06:45:21.384776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:01.586 [2024-11-20 06:45:21.384925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:01.586 [2024-11-20 06:45:21.384931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:01.586 [2024-11-20 06:45:21.384936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:01.586 [2024-11-20 06:45:21.384941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:01.586 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:01.586 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:01.586 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:01.586 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:01.586 [2024-11-20 06:45:21.390460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:01.586 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:01.586 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:01.587 [2024-11-20 06:45:21.396939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:01.587 [2024-11-20 06:45:21.397399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.587 [2024-11-20 06:45:21.397413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:01.587 [2024-11-20 06:45:21.397418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:01.587 [2024-11-20 06:45:21.397567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:01.587 [2024-11-20 06:45:21.397715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:01.587 [2024-11-20 06:45:21.397721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:01.587 [2024-11-20 06:45:21.397726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:01.587 [2024-11-20 06:45:21.397730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:01.587 [2024-11-20 06:45:21.409616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:01.587 [2024-11-20 06:45:21.410144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.587 [2024-11-20 06:45:21.410158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:01.587 [2024-11-20 06:45:21.410164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:01.587 [2024-11-20 06:45:21.410313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:01.587 [2024-11-20 06:45:21.410461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:01.587 [2024-11-20 06:45:21.410467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:01.587 [2024-11-20 06:45:21.410472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:01.587 [2024-11-20 06:45:21.410477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:01.587 [2024-11-20 06:45:21.422326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:01.587 [2024-11-20 06:45:21.422797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.587 [2024-11-20 06:45:21.422817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:01.587 [2024-11-20 06:45:21.422822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:01.587 [2024-11-20 06:45:21.422976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:01.587 [2024-11-20 06:45:21.423125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:01.587 [2024-11-20 06:45:21.423131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:01.587 [2024-11-20 06:45:21.423136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:01.587 [2024-11-20 06:45:21.423141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
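Interleaved with the retry noise, target configuration has started: bdevperf.sh@17 creates the TCP transport (acknowledged by the `*** TCP Transport Init ***` notice) and bdevperf.sh@18 creates a 64 MiB malloc bdev with 512-byte blocks to back the namespace. `rpc_cmd` is the harness's wrapper around SPDK's RPC client, so the equivalent direct calls would be roughly (rpc.py path assumed; arguments copied from the log):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options as passed by bdevperf.sh@17
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks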
00:34:01.587 Malloc0
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:01.587 [2024-11-20 06:45:21.434999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:01.587 [2024-11-20 06:45:21.435483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.587 [2024-11-20 06:45:21.435513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:01.587 [2024-11-20 06:45:21.435521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:01.587 [2024-11-20 06:45:21.435685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:01.587 [2024-11-20 06:45:21.435843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:01.587 [2024-11-20 06:45:21.435850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:01.587 [2024-11-20 06:45:21.435856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:01.587 [2024-11-20 06:45:21.435862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:01.587 [2024-11-20 06:45:21.447587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:01.587 [2024-11-20 06:45:21.448072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.587 [2024-11-20 06:45:21.448088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d24280 with addr=10.0.0.2, port=4420
00:34:01.587 [2024-11-20 06:45:21.448093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d24280 is same with the state(6) to be set
00:34:01.587 [2024-11-20 06:45:21.448242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d24280 (9): Bad file descriptor
00:34:01.587 [2024-11-20 06:45:21.448391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:01.587 [2024-11-20 06:45:21.448397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:01.587 [2024-11-20 06:45:21.448402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:01.587 [2024-11-20 06:45:21.448407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:01.587 [2024-11-20 06:45:21.458557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:01.587 [2024-11-20 06:45:21.460257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:01.587 06:45:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2898712
00:34:01.849 [2024-11-20 06:45:21.537652] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:34:03.052 4603.71 IOPS, 17.98 MiB/s
[2024-11-20T05:45:23.914Z] 5646.62 IOPS, 22.06 MiB/s
[2024-11-20T05:45:25.298Z] 6446.78 IOPS, 25.18 MiB/s
[2024-11-20T05:45:26.238Z] 7097.30 IOPS, 27.72 MiB/s
[2024-11-20T05:45:27.179Z] 7616.00 IOPS, 29.75 MiB/s
[2024-11-20T05:45:28.121Z] 8056.50 IOPS, 31.47 MiB/s
[2024-11-20T05:45:29.061Z] 8444.92 IOPS, 32.99 MiB/s
[2024-11-20T05:45:30.002Z] 8758.71 IOPS, 34.21 MiB/s
[2024-11-20T05:45:30.002Z] 9041.27 IOPS, 35.32 MiB/s
00:34:10.082                                                       Latency(us)
[2024-11-20T05:45:30.002Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:10.082 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:10.082 Verification LBA range: start 0x0 length 0x4000
00:34:10.082 Nvme1n1            :      15.01    9043.59      35.33   13453.75       0.00    5670.78     546.13   16820.91
[2024-11-20T05:45:30.002Z] ===================================================================================================================
[2024-11-20T05:45:30.002Z] Total              :               9043.59      35.33   13453.75       0.00    5670.78     546.13   16820.91
00:34:10.082 06:45:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:34:10.082 06:45:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:10.082 06:45:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:10.082 06:45:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
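This is the turning point of the run. bdevperf.sh@19-21 create subsystem nqn.2016-06.io.spdk:cnode1, attach Malloc0 as its namespace, and add the TCP listener; the moment the `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice appears, the pending reconnect finally succeeds (`Resetting controller successful`) and bdevperf ramps from ~4600 to ~9000 IOPS over the 15-second verify run. In the summary table, latency columns are in microseconds and Fail/s reports I/Os failed per second, presumably accumulated during the induced disconnect phases of this test. Distilled to direct RPC calls, the target-side sequence is roughly (rpc.py path assumed; arguments copied from the log):

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420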
-- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.343 rmmod nvme_tcp 00:34:10.343 rmmod nvme_fabrics 00:34:10.343 rmmod nvme_keyring 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2899797 ']' 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2899797 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 2899797 ']' 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 2899797 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2899797 00:34:10.343 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:10.344 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:10.344 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2899797' 00:34:10.344 killing process with pid 2899797 00:34:10.344 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 2899797 00:34:10.344 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 2899797 00:34:10.344 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:10.344 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:10.344 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:10.344 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:34:10.344 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:34:10.344 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:10.344 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:34:10.604 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.604 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.604 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.604 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.604 06:45:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.515 06:45:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:12.515 00:34:12.515 real 0m28.477s 00:34:12.515 user 1m3.420s 00:34:12.515 sys 0m7.836s 00:34:12.515 06:45:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:12.515 06:45:32 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:12.515 ************************************ 00:34:12.515 END TEST nvmf_bdevperf 00:34:12.515 ************************************ 00:34:12.515 06:45:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:12.515 06:45:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:12.515 06:45:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:12.515 06:45:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.515 ************************************ 00:34:12.515 START TEST nvmf_target_disconnect 00:34:12.515 ************************************ 00:34:12.515 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:12.775 * Looking for test storage... 00:34:12.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:12.775 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:12.775 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:12.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.776 --rc genhtml_branch_coverage=1 00:34:12.776 --rc genhtml_function_coverage=1 00:34:12.776 --rc genhtml_legend=1 00:34:12.776 --rc geninfo_all_blocks=1 00:34:12.776 --rc geninfo_unexecuted_blocks=1 00:34:12.776 00:34:12.776 ' 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:12.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.776 --rc genhtml_branch_coverage=1 00:34:12.776 --rc genhtml_function_coverage=1 00:34:12.776 --rc genhtml_legend=1 00:34:12.776 --rc geninfo_all_blocks=1 00:34:12.776 --rc geninfo_unexecuted_blocks=1 00:34:12.776 00:34:12.776 ' 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:12.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.776 --rc genhtml_branch_coverage=1 00:34:12.776 --rc genhtml_function_coverage=1 00:34:12.776 --rc genhtml_legend=1 00:34:12.776 --rc geninfo_all_blocks=1 00:34:12.776 --rc geninfo_unexecuted_blocks=1 00:34:12.776 00:34:12.776 ' 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:12.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.776 --rc genhtml_branch_coverage=1 00:34:12.776 --rc genhtml_function_coverage=1 00:34:12.776 --rc genhtml_legend=1 00:34:12.776 --rc geninfo_all_blocks=1 00:34:12.776 --rc geninfo_unexecuted_blocks=1 00:34:12.776 00:34:12.776 ' 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:12.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:12.776 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:34:12.777 06:45:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:20.915 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:20.915 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:20.915 Found net devices under 0000:31:00.0: cvl_0_0 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:20.915 Found net devices under 0000:31:00.1: cvl_0_1 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:20.915 06:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.915 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.915 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.915 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:20.915 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.915 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.915 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:20.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:20.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:34:20.916 00:34:20.916 --- 10.0.0.2 ping statistics --- 00:34:20.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.916 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:20.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:34:20.916 00:34:20.916 --- 10.0.0.1 ping statistics --- 00:34:20.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.916 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:20.916 ************************************ 00:34:20.916 START TEST nvmf_target_disconnect_tc1 00:34:20.916 ************************************ 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:20.916 06:45:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:20.916 [2024-11-20 06:45:40.448226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.916 [2024-11-20 06:45:40.448316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1615f60 with addr=10.0.0.2, port=4420 00:34:20.916 [2024-11-20 06:45:40.448350] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:20.916 [2024-11-20 06:45:40.448370] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:20.916 [2024-11-20 06:45:40.448378] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:34:20.916 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:20.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:20.916 Initializing NVMe Controllers 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:20.916 00:34:20.916 real 0m0.142s 00:34:20.916 user 0m0.058s 00:34:20.916 sys 0m0.083s 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:20.916 ************************************ 00:34:20.916 END TEST nvmf_target_disconnect_tc1 00:34:20.916 ************************************ 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:20.916 ************************************ 00:34:20.916 START TEST nvmf_target_disconnect_tc2 00:34:20.916 ************************************ 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2905872 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2905872 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2905872 ']' 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:20.916 06:45:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:20.916 [2024-11-20 06:45:40.609506] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:34:20.916 [2024-11-20 06:45:40.609564] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:20.916 [2024-11-20 06:45:40.709409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:20.916 [2024-11-20 06:45:40.763515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:20.916 [2024-11-20 06:45:40.763567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:20.916 [2024-11-20 06:45:40.763575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:20.916 [2024-11-20 06:45:40.763582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:20.916 [2024-11-20 06:45:40.763589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:20.916 [2024-11-20 06:45:40.765642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:20.916 [2024-11-20 06:45:40.765815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:20.916 [2024-11-20 06:45:40.766027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:20.916 [2024-11-20 06:45:40.766029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.857 Malloc0 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.857 [2024-11-20 06:45:41.534011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.857 06:45:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.857 [2024-11-20 06:45:41.574436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.857 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:21.858 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.858 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2906223 00:34:21.858 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:21.858 06:45:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:23.776 06:45:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2905872 00:34:23.776 06:45:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error 
(sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Write completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Write completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Write completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Write completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Write completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Read completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.776 Write completed with error (sct=0, sc=8) 00:34:23.776 starting I/O failed 00:34:23.777 Write completed with error (sct=0, sc=8) 00:34:23.777 starting I/O failed 00:34:23.777 Read completed with error (sct=0, sc=8) 00:34:23.777 starting I/O failed 00:34:23.777 Read completed with error (sct=0, sc=8) 00:34:23.777 starting I/O failed 00:34:23.777 Write completed with error (sct=0, sc=8) 00:34:23.777 starting I/O failed 00:34:23.777 [2024-11-20 06:45:43.613974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:23.777 [2024-11-20 06:45:43.614403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.777 [2024-11-20 06:45:43.614426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.777 qpair failed and we were unable to recover it. 00:34:23.777 [2024-11-20 06:45:43.615081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.777 [2024-11-20 06:45:43.615137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.777 qpair failed and we were unable to recover it. 00:34:23.777 [2024-11-20 06:45:43.615413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.777 [2024-11-20 06:45:43.615425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.777 qpair failed and we were unable to recover it. 
00:34:23.778 [2024-11-20 06:45:43.625220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.625227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.625556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.625563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.625896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.625904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.626258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.626266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.626586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.626595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.626996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.627003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.627422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.627430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.627759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.627767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.628071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.628079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.628471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.628479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 
00:34:23.778 [2024-11-20 06:45:43.628789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.628797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.629091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.629098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.629429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.629437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.629755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.629763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.630088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.630096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.630403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.630411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.630654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.630663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.630787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.630795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.631109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.631116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.631303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.631310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 
00:34:23.778 [2024-11-20 06:45:43.631665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.631674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.631835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.631842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.632174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.632181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.632505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.632513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.632861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.632869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.633195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.633203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.633397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.633409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.633763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.633772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.634214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.634221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.778 qpair failed and we were unable to recover it. 00:34:23.778 [2024-11-20 06:45:43.634512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.778 [2024-11-20 06:45:43.634520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 
00:34:23.779 [2024-11-20 06:45:43.634832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.634839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.635156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.635164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.635498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.635506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.635723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.635731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.635955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.635963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.636285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.636292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.636590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.636598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.636937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.636945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.637148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.637157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.637464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.637471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 
00:34:23.779 [2024-11-20 06:45:43.637789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.637797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.638101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.638109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.638423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.638431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.638731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.638740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.639088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.639097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.639389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.639398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.639734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.639742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.640060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.640067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.640441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.640450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.640784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.640793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 
00:34:23.779 [2024-11-20 06:45:43.641111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.641120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.641435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.641442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.641755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.641763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.642055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.642064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.642379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.642386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.642598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.642605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.642913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.642920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.643248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.643255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.643571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.643579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.643892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.643900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 
00:34:23.779 [2024-11-20 06:45:43.644224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.779 [2024-11-20 06:45:43.644232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.779 qpair failed and we were unable to recover it. 00:34:23.779 [2024-11-20 06:45:43.644420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.644427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.644754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.644764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.645085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.645093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.645377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.645384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.645582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.645590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.645881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.645889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.646209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.646216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.646543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.646552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.646944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.646953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 
00:34:23.780 [2024-11-20 06:45:43.647267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.647274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.647665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.647673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.647998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.648005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.648328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.648335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.648651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.648658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.648972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.648982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.649189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.649199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.649503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.649510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.649833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.649844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.650043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.650054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 
00:34:23.780 [2024-11-20 06:45:43.650376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.650384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.650604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.650612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.650835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.650843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.651181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.651189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.651515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.651522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.651831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.651839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.652161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.652168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.652493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.652500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.652810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.652818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.653159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.653166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 
00:34:23.780 [2024-11-20 06:45:43.653524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.653533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.653863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.653871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.654190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.654197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.654396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.654403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.780 [2024-11-20 06:45:43.654720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.780 [2024-11-20 06:45:43.654729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.780 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.654836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.654844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.655136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.655144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.655331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.655341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.655663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.655671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.655991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.656041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 
00:34:23.781 [2024-11-20 06:45:43.656354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.656362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.656667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.656675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.656878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.656887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.657187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.657195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.657545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.657554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.657941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.657949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.658246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.658254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.658581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.658588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.658938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.658947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.659286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.659293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 
00:34:23.781 [2024-11-20 06:45:43.659622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.659630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.659996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.660003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.660320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.660328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.660649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.660656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.660860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.660868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.661197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.661206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.661540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.661548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.661723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.661733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.662068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.662077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.662481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.662488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 
00:34:23.781 [2024-11-20 06:45:43.662809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.662817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.663134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.663144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.663466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.663473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.663800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.663808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.664205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.664214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.664539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.664546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.781 qpair failed and we were unable to recover it. 00:34:23.781 [2024-11-20 06:45:43.664824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.781 [2024-11-20 06:45:43.664832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.665236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.665244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.665553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.665562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.665884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.665893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 
00:34:23.782 [2024-11-20 06:45:43.666213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.666223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.666530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.666538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.666895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.666903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.667222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.667230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.667427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.667436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.667716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.667724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.668043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.668051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.668299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.668306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.668626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.668636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 00:34:23.782 [2024-11-20 06:45:43.668967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.782 [2024-11-20 06:45:43.668976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:23.782 qpair failed and we were unable to recover it. 
00:34:23.782 [2024-11-20 06:45:43.669167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.782 [2024-11-20 06:45:43.669175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:23.782 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for roughly 200 consecutive reconnect attempts, one every ~0.3 ms, from 2024-11-20 06:45:43.669 through 06:45:43.735 (log time 00:34:23.782-00:34:24.063); every attempt targets the same tqpair=0xb2f010 at addr=10.0.0.2, port=4420 and fails identically with errno = 111 ...]
00:34:24.063 [2024-11-20 06:45:43.735625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.063 [2024-11-20 06:45:43.735634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.063 qpair failed and we were unable to recover it.
00:34:24.063 [2024-11-20 06:45:43.736042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.736050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.736355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.736362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.736681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.736689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.737018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.737025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.737335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.737343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.737718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.737726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.738053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.738062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.738262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.738270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.738602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.738611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.738788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.738796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 
00:34:24.063 [2024-11-20 06:45:43.739102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.739109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.739465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.739472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.739715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.739722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.740036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.740044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.740363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.740371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.740695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.740702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.741094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.741104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.741461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.741469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.741800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.063 [2024-11-20 06:45:43.741807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.063 qpair failed and we were unable to recover it. 00:34:24.063 [2024-11-20 06:45:43.742142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.742151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 
00:34:24.064 [2024-11-20 06:45:43.742463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.742471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.742798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.742807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.743124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.743131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.743459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.743466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.743816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.743823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.744150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.744157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.744486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.744493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.744809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.744817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.745155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.745162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.745469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.745476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 
00:34:24.064 [2024-11-20 06:45:43.745802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.745810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.746152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.746159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.746496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.746504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.746712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.746720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.747000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.747009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.747382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.747390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.747701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.747719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.748038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.748046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.748366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.748373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.748693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.748700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 
00:34:24.064 [2024-11-20 06:45:43.749017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.749025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.749355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.749363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.749559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.749567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.749819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.749827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.750069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.750076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.064 qpair failed and we were unable to recover it. 00:34:24.064 [2024-11-20 06:45:43.750370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.064 [2024-11-20 06:45:43.750377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.750712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.750720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.751052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.751060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.751380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.751388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.751592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.751599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 
00:34:24.065 [2024-11-20 06:45:43.751876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.751884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.752212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.752220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.752539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.752548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.752768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.752777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.753098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.753105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.753436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.753444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.753651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.753659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.753909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.753921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.754216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.754224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.754564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.754572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 
00:34:24.065 [2024-11-20 06:45:43.754755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.754763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.755129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.755136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.755469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.755477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.755812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.755820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.756147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.756154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.756478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.756485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.756774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.756782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.757107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.757114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.757443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.757451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.757823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.757831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 
00:34:24.065 [2024-11-20 06:45:43.758038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.758046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.758367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.758374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.758705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.758713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.759033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.759043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.759213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.759222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.759541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.759548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.759844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.759852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.760044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.065 [2024-11-20 06:45:43.760053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.065 qpair failed and we were unable to recover it. 00:34:24.065 [2024-11-20 06:45:43.760391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.760399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.760708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.760715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 
00:34:24.066 [2024-11-20 06:45:43.761063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.761071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.761407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.761414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.761757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.761766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.762095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.762102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.762434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.762445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.762763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.762770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.763071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.763079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.763416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.763423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.763742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.763754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.764067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.764074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 
00:34:24.066 [2024-11-20 06:45:43.764399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.764407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.764727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.764735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.765072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.765081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.765407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.765416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.765730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.765738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.766155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.766164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.766476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.766483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.766799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.766808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.767061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.767068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.767397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.767405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 
00:34:24.066 [2024-11-20 06:45:43.767735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.767742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.768016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.768024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.768362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.768369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.768701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.768709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.769040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.769048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.769230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.769238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.769579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.769586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.769889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.769897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.770232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.770239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.770638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.770646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 
00:34:24.066 [2024-11-20 06:45:43.770902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.770910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.066 [2024-11-20 06:45:43.771222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.066 [2024-11-20 06:45:43.771230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.066 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.771554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.771563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.771888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.771896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.772220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.772228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.772550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.772557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.772888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.772899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.773226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.773234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.773531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.773539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.773743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.773758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 
00:34:24.067 [2024-11-20 06:45:43.774040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.774048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.774385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.774392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.774705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.774713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.775037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.775046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.775374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.775382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.775683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.775693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.776018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.776026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.776353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.776361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.776706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.776715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 00:34:24.067 [2024-11-20 06:45:43.777062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.067 [2024-11-20 06:45:43.777070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.067 qpair failed and we were unable to recover it. 
00:34:24.067 [2024-11-20 06:45:43.777375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.067 [2024-11-20 06:45:43.777383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.067 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously through 2024-11-20 06:45:43.844 ...]
00:34:24.074 [2024-11-20 06:45:43.844229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.074 [2024-11-20 06:45:43.844237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.074 qpair failed and we were unable to recover it.
00:34:24.074 [2024-11-20 06:45:43.844466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.844475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.844797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.844805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.844998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.845005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.845287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.845294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.845585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.845592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.845927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.845935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.846270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.846277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.846580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.846587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.846783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.846793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.847118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.847125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 
00:34:24.074 [2024-11-20 06:45:43.847436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.847443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.847730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.847737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.848046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.848055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.848370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.848377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.848695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.848704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.849017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.849025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.849348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.849356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.849678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.849685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.850009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.850017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.850323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.850330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 
00:34:24.074 [2024-11-20 06:45:43.850638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.850645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.850972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.850980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.851189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.851197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.851538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.851545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.851854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.851862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.852189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.852198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.852526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.852534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.074 qpair failed and we were unable to recover it. 00:34:24.074 [2024-11-20 06:45:43.852858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.074 [2024-11-20 06:45:43.852865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.853194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.853201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.853522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.853529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 
00:34:24.075 [2024-11-20 06:45:43.853758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.853768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.854121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.854129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.854311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.854318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.854611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.854619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.854866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.854874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.855212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.855220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.855542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.855550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.855871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.855879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.856211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.856219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.856541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.856548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 
00:34:24.075 [2024-11-20 06:45:43.856887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.856896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.857227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.857235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.857425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.857432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.857765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.857773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.858128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.858136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.858456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.858463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.858795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.858803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.859129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.859136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.859445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.859452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.859785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.859793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 
00:34:24.075 [2024-11-20 06:45:43.860115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.860123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.860447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.860456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.860777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.860786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.861046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.861053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.075 [2024-11-20 06:45:43.861387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.075 [2024-11-20 06:45:43.861395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.075 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.861726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.861734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.862143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.862151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.862479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.862488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.862820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.862828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.863162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.863169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 
00:34:24.076 [2024-11-20 06:45:43.863573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.863582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.863896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.863903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.864226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.864234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.864556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.864564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.864898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.864906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.865240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.865248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.865558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.865566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.865881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.865889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.866299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.866308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.866632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.866640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 
00:34:24.076 [2024-11-20 06:45:43.866968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.866977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.867300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.867308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.867629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.867637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.867983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.867991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.868312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.868321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.868644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.868652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.868983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.868992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.869167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.869175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.869491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.869498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.869722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.869730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 
00:34:24.076 [2024-11-20 06:45:43.870073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.870081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.870396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.870403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.870721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.870729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.871091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.871099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.871490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.076 [2024-11-20 06:45:43.871499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.076 qpair failed and we were unable to recover it. 00:34:24.076 [2024-11-20 06:45:43.871835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.871843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.872053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.872061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.872435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.872443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.872761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.872770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.873009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.873017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 
00:34:24.077 [2024-11-20 06:45:43.873188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.873195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.873514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.873522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.873818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.873827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.874159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.874167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.874506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.874514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.874843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.874852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.875161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.875168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.875378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.875386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.875574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.875586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.875868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.875877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 
00:34:24.077 [2024-11-20 06:45:43.876219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.876227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.876554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.876563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.876885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.876894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.877218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.877225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.877537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.877545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.877875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.877884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.878081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.878089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.878409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.878417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.878792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.878801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.879143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.879151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 
00:34:24.077 [2024-11-20 06:45:43.879476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.879485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.879810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.879819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.880152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.880161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.880487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.880495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.880816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.880826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.881203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.881211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.881603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.881610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.077 [2024-11-20 06:45:43.881865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.077 [2024-11-20 06:45:43.881874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.077 qpair failed and we were unable to recover it. 00:34:24.078 [2024-11-20 06:45:43.882160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.882168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 00:34:24.078 [2024-11-20 06:45:43.882478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.882487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 
00:34:24.078 [2024-11-20 06:45:43.882726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.882733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 00:34:24.078 [2024-11-20 06:45:43.883051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.883060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 00:34:24.078 [2024-11-20 06:45:43.883380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.883387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 00:34:24.078 [2024-11-20 06:45:43.883716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.883724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 00:34:24.078 [2024-11-20 06:45:43.884089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.884098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 00:34:24.078 [2024-11-20 06:45:43.884397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.884410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 00:34:24.078 [2024-11-20 06:45:43.884636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.884644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 00:34:24.078 [2024-11-20 06:45:43.884966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.884975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 00:34:24.078 [2024-11-20 06:45:43.885310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.885318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 00:34:24.078 [2024-11-20 06:45:43.885541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.078 [2024-11-20 06:45:43.885549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.078 qpair failed and we were unable to recover it. 
00:34:24.078 [2024-11-20 06:45:43.885840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.078 [2024-11-20 06:45:43.885848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.078 qpair failed and we were unable to recover it.
[... the same three-line failure repeats verbatim at millisecond intervals, only the timestamps advancing ...]
00:34:24.085 [2024-11-20 06:45:43.951302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.085 [2024-11-20 06:45:43.951309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.085 qpair failed and we were unable to recover it.
00:34:24.085 [2024-11-20 06:45:43.951633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.951641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.951965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.951976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.952299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.952309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.952622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.952634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.952953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.952963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.953283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.953291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.953615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.953623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.953935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.953945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.954281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.954290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.954618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.954627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 
00:34:24.085 [2024-11-20 06:45:43.954928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.954937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.955104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.955114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.955454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.955463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.955836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.955843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.956055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.956065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.956315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.956323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.956518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.956526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.956886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.956894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.957298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.957308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 00:34:24.085 [2024-11-20 06:45:43.957630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.085 [2024-11-20 06:45:43.957639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.085 qpair failed and we were unable to recover it. 
00:34:24.086 [2024-11-20 06:45:43.957839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.957848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 00:34:24.086 [2024-11-20 06:45:43.958194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.958202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 00:34:24.086 [2024-11-20 06:45:43.958549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.958557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 00:34:24.086 [2024-11-20 06:45:43.958875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.958883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 00:34:24.086 [2024-11-20 06:45:43.959208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.959217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 00:34:24.086 [2024-11-20 06:45:43.959456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.959466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 00:34:24.086 [2024-11-20 06:45:43.959788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.959797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 00:34:24.086 [2024-11-20 06:45:43.960013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.960021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 00:34:24.086 [2024-11-20 06:45:43.960257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.960264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 00:34:24.086 [2024-11-20 06:45:43.960604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.960611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 
00:34:24.086 [2024-11-20 06:45:43.960925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.960935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 00:34:24.086 [2024-11-20 06:45:43.961128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.086 [2024-11-20 06:45:43.961136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.086 qpair failed and we were unable to recover it. 00:34:24.374 [2024-11-20 06:45:43.961507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.374 [2024-11-20 06:45:43.961517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.374 qpair failed and we were unable to recover it. 00:34:24.374 [2024-11-20 06:45:43.961827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.961837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.962188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.962196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.962379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.962387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.962721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.962728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.963065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.963073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.963400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.963407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.963651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.963659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 
00:34:24.375 [2024-11-20 06:45:43.964020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.964028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.964360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.964370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.964703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.964712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.965031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.965040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.965365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.965374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.965699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.965708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.966021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.966029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.966414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.966423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.966756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.966765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.967101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.967108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 
00:34:24.375 [2024-11-20 06:45:43.967308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.967316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.375 [2024-11-20 06:45:43.967569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.375 [2024-11-20 06:45:43.967577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.375 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.967915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.967922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.968137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.968144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.968488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.968496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.968836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.968844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.969175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.969182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.969505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.969513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.969815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.969824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.970189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.970196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 
00:34:24.376 [2024-11-20 06:45:43.970505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.970513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.970838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.970846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.971148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.971156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.971477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.971484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.971808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.971816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.972147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.972154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.972475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.972491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.972845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.972853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.973184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.973191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.973512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.973519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 
00:34:24.376 [2024-11-20 06:45:43.973822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.973830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.974166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.974175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.974467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.974475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.974800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.974807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.975125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.975132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.975459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.975467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.975796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.975805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.976037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.976044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.976370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.976379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.376 qpair failed and we were unable to recover it. 00:34:24.376 [2024-11-20 06:45:43.976772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.376 [2024-11-20 06:45:43.976779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 
00:34:24.377 [2024-11-20 06:45:43.976974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.976983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.977268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.977275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.977597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.977605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.977923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.977930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.978250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.978257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.978547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.978554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.978856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.978865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.979161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.979168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.979493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.979500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.979824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.979832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 
00:34:24.377 [2024-11-20 06:45:43.980138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.980146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.980463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.980470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.980801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.980809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.981129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.981138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.981459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.981468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.981670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.981679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.981962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.981970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.982345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.982354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.982680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.982690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.982999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.983007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 
00:34:24.377 [2024-11-20 06:45:43.983329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.983336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.983661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.983668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.983999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.984007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.984324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.984332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.377 qpair failed and we were unable to recover it. 00:34:24.377 [2024-11-20 06:45:43.984656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.377 [2024-11-20 06:45:43.984664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.985005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.985012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.985301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.985309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.985513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.985520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.985808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.985816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.986166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.986173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 
00:34:24.378 [2024-11-20 06:45:43.986494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.986502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.986824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.986833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.987159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.987167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.987552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.987561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.987883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.987891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.988209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.988217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.988536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.988544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.988869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.988877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.989077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.989085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.989518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.989525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 
00:34:24.378 [2024-11-20 06:45:43.989735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.989743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.990048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.990055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.990387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.990395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.990713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.990720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.990910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.990917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.991293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.991301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.991636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.991644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.991972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.991980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.992301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.992309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 00:34:24.378 [2024-11-20 06:45:43.992633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.378 [2024-11-20 06:45:43.992640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.378 qpair failed and we were unable to recover it. 
00:34:24.379 [2024-11-20 06:45:43.993014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.379 [2024-11-20 06:45:43.993023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.379 qpair failed and we were unable to recover it.
00:34:24.387 [... the three-line error above repeats ~200 times between 06:45:43.993 and 06:45:44.058 with only the timestamps changing: every connect() attempt to 10.0.0.2 port 4420 for tqpair=0xb2f010 returned errno = 111, and each time the qpair failed and could not be recovered ...]
00:34:24.387 [2024-11-20 06:45:44.058609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.058617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.058973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.058981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.059262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.059270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.059604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.059614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.059916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.059924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.060260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.060267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.060570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.060578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.060770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.060778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.061071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.061079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.061409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.061416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 
00:34:24.387 [2024-11-20 06:45:44.061768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.061775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.062103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.062110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.062431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.062439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.062752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.062761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.063050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.063059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.063377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.063386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.063710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.063717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.063998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.064006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.064305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.064312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.064640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.064648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 
00:34:24.387 [2024-11-20 06:45:44.064977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.064984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.065290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.065297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.065665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.065672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.065997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.066005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.066331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.066338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.066670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.066678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.066848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.066863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.067170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.067178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.067380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.067387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.067673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.067681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 
00:34:24.387 [2024-11-20 06:45:44.068019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.068027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.068338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.068346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.068426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.068433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.068712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.068720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.069003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.069011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.069235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.069243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.069569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.069576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.069857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.069865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.070198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.070205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.070524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.070531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 
00:34:24.387 [2024-11-20 06:45:44.070854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.070861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.071189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.071197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.071523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.071532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.071851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.071858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.072174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.072184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.072503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.072510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.072834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.072842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.073170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.073179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.073500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.073509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.073814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.073822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 
00:34:24.387 [2024-11-20 06:45:44.074147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.387 [2024-11-20 06:45:44.074155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.387 qpair failed and we were unable to recover it. 00:34:24.387 [2024-11-20 06:45:44.074477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.074484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.074809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.074817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.075133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.075140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.075361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.075368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.075595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.075602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.075968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.075977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.076301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.076308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.076707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.076716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.077053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.077061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 
00:34:24.388 [2024-11-20 06:45:44.077385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.077393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.077715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.077722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.078017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.078025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.078351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.078358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.078673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.078691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.079078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.079086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.079400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.079408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.079742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.079753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.080085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.080093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.080417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.080424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 
00:34:24.388 [2024-11-20 06:45:44.080822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.080831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.081070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.081080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.081463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.081470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.081646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.081654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.082013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.082022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.082343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.082351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.082579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.082586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.082928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.082936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.083273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.083280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.083615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.083622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 
00:34:24.388 [2024-11-20 06:45:44.084027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.084037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.084353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.084360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.084682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.084689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.085017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.085025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.085333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.085340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.085558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.085567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.085892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.085899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.086216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.086224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.086583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.086590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.086902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.086910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 
00:34:24.388 [2024-11-20 06:45:44.087098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.087105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.087410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.087418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.087757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.087764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.088138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.088146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.088461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.088469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.088678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.088687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.088878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.088885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.089255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.089263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.089587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.089594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.090054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.090062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 
00:34:24.388 [2024-11-20 06:45:44.090375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.090382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.090607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.090616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.090962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.090970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.091288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.091296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.091623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.091630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.091939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.091947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.092256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.092264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.092576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.092584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.092900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.092908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.093112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.093120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 
00:34:24.388 [2024-11-20 06:45:44.093479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.093486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.093814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.093823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.094143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.094152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.094471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.094478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.094694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.094702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.095007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.095015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.095338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.095346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.095668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.095677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.095866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.095876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.096211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.096220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 
00:34:24.388 [2024-11-20 06:45:44.096558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.096567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.096910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.096918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.097225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.097233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.388 qpair failed and we were unable to recover it. 00:34:24.388 [2024-11-20 06:45:44.097560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.388 [2024-11-20 06:45:44.097567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.389 qpair failed and we were unable to recover it. 00:34:24.389 [2024-11-20 06:45:44.097773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.389 [2024-11-20 06:45:44.097781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.389 qpair failed and we were unable to recover it. 00:34:24.389 [2024-11-20 06:45:44.098107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.389 [2024-11-20 06:45:44.098114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.389 qpair failed and we were unable to recover it. 00:34:24.389 [2024-11-20 06:45:44.098452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.389 [2024-11-20 06:45:44.098460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.389 qpair failed and we were unable to recover it. 00:34:24.389 [2024-11-20 06:45:44.098776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.389 [2024-11-20 06:45:44.098784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.389 qpair failed and we were unable to recover it. 00:34:24.389 [2024-11-20 06:45:44.099115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.389 [2024-11-20 06:45:44.099123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.389 qpair failed and we were unable to recover it. 00:34:24.389 [2024-11-20 06:45:44.099439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.389 [2024-11-20 06:45:44.099446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.389 qpair failed and we were unable to recover it. 
00:34:24.389 [2024-11-20 06:45:44.099769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.389 [2024-11-20 06:45:44.099777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.389 qpair failed and we were unable to recover it.
00:34:24.389 [... the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt between 06:45:44.099769 and 06:45:44.167019; only the timestamps change ...]
00:34:24.392 [2024-11-20 06:45:44.167011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.392 [2024-11-20 06:45:44.167019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.392 qpair failed and we were unable to recover it.
00:34:24.392 [2024-11-20 06:45:44.167335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.167344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.167654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.167664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.167995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.168003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.168280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.168289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.168504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.168519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.168838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.168847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.169168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.169178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.169439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.169449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.169772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.169781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.170104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.170113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 
00:34:24.392 [2024-11-20 06:45:44.170436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.170445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.170765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.170774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.171117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.171125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.171423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.171431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.171756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.171766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.172055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.172064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.172402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.172410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.172578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.172588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.172912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.172922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.173265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.173273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 
00:34:24.392 [2024-11-20 06:45:44.173592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.173601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.173924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.173934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.174256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.174266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.174584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.174592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.174923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.174932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.175283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.175290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.175619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.175629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.175973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.175982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.176299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.176309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.176519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.176528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 
00:34:24.392 [2024-11-20 06:45:44.176866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.176876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.177045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.177055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.177441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.177449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.177781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.177790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.178096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.178105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.178434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.178443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.178817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.178825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.179131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.179139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.179442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.179452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.179786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.179795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 
00:34:24.392 [2024-11-20 06:45:44.180111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.180119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.180439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.180447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.180775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.180783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.181111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.181121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.181443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.181454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.181856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.181866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.182196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.182203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.182525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.182533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.182849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.182857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.183178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.183186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 
00:34:24.392 [2024-11-20 06:45:44.183519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.183527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.183719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.183727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.184105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.184112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.184436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.184444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.184705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.184712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.184920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.184930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.185247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.185254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.185562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.185569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.185899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.185907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.186234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.186241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 
00:34:24.392 [2024-11-20 06:45:44.186604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.186611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.186930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.186938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.187165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.187172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.187509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.187517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.187740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.187754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.392 [2024-11-20 06:45:44.188090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.392 [2024-11-20 06:45:44.188097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.392 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.188418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.188426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.188747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.188754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.189076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.189084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.189428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.189436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 
00:34:24.393 [2024-11-20 06:45:44.189754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.189761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.190121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.190130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.190453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.190463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.190808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.190816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.191220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.191228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.191568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.191576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.191904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.191911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.192232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.192239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.192612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.192620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.192915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.192923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 
00:34:24.393 [2024-11-20 06:45:44.193161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.193168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.193504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.193512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.193807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.193815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.194139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.194147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.194459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.194467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.194784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.194794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.195105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.195112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.195282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.195291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.195632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.195639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.195874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.195882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 
00:34:24.393 [2024-11-20 06:45:44.196249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.196257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.196577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.196585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.196910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.196918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.197239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.197246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.197536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.197543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.197858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.197866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.198194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.198201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.198526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.198534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.198861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.198869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.199193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.199200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 
00:34:24.393 [2024-11-20 06:45:44.199370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.199379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.199583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.199591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.199908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.199917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.200206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.200214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.200424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.200431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.200708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.200716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.201046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.201054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.201385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.201392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.201697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.201705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.202023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.202032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 
00:34:24.393 [2024-11-20 06:45:44.202347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.202354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.202646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.202654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.202970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.202980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.203300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.203308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.203631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.203640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.204004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.204012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.204336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.204344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.204661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.204669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.204990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.204999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.205326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.205335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 
00:34:24.393 [2024-11-20 06:45:44.205656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.205664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.205978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.205988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.206310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.206319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.206623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.206632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.206810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.206819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.207140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.207149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.207470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.207479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.207803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.207812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.207991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.207998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.208314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.208322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 
00:34:24.393 [2024-11-20 06:45:44.208656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.208664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.208981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.208989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.209311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.209318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.209635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.209643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.209858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.209867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.210150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.210158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.210476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.210483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.210798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.210806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.211134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.393 [2024-11-20 06:45:44.211141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.393 qpair failed and we were unable to recover it. 00:34:24.393 [2024-11-20 06:45:44.211450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.211457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 
00:34:24.394 [2024-11-20 06:45:44.211767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.211775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.212089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.212097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.212423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.212430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.212647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.212654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.213012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.213020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.213346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.213353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.213523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.213532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.213835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.213843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.214180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.214189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.214499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.214507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 
00:34:24.394 [2024-11-20 06:45:44.214830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.214839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.215165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.215174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.215502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.215510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.215702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.215713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.216004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.216013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.216355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.216363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.216568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.216576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.216904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.216912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.217072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.217081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.217495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.217504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 
00:34:24.394 [2024-11-20 06:45:44.217818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.217827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.218026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.218035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.218362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.218371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.218679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.218688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.218984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.218994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.219342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.219352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.219673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.219682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.219990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.219999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.220323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.220333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.220651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.220660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 
00:34:24.394 [2024-11-20 06:45:44.220982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.220992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.221228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.221238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.221569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.221577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.221774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.221783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.222072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.222081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.222249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.222259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.222583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.222592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.222930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.222939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.223264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.223273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.223592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.223601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 
00:34:24.394 [2024-11-20 06:45:44.223914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.223923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.224147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.224156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.224483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.224492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.224693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.224702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.225032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.225042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.225387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.225395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.225724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.225733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.225918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.225927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.226284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.226292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.226627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.226636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 
00:34:24.394 [2024-11-20 06:45:44.226994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.227002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.227308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.227315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.227643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.227650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.228062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.228070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.228407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.228415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.228753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.228761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.229092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.229100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.229453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.229460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.229787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.229795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.230138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.230146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 
00:34:24.394 [2024-11-20 06:45:44.230458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.230466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.230788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.230797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.231113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.231120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.231427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.231434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.231757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.231764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.231984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.231992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.232336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.232343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.232682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.232690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.233082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.233089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.394 qpair failed and we were unable to recover it. 00:34:24.394 [2024-11-20 06:45:44.233409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.394 [2024-11-20 06:45:44.233417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 
00:34:24.395 [2024-11-20 06:45:44.233748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.233757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.234068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.234076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.234437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.234444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.234757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.234765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.235089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.235097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.235407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.235414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.235743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.235754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.236079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.236087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.236414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.236422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.236744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.236756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 
00:34:24.395 [2024-11-20 06:45:44.237060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.237069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.237415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.237425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.237756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.237765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.238088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.238096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.238418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.238426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.238773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.238781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.238969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.238977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.239182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.239190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.239484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.239491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.239851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.239860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 
00:34:24.395 [2024-11-20 06:45:44.240184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.240191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.240495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.240503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.240827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.240834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.241216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.241223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.241543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.241551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.241887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.241894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.242229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.242236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.242397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.242406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.242695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.242702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.243029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.243037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 
00:34:24.395 [2024-11-20 06:45:44.243448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.243455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.243769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.243777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.244122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.244129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.244341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.244348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.244702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.244709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.245030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.245048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.245438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.245446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.245758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.245765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.246077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.246084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.246396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.246404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 
00:34:24.395 [2024-11-20 06:45:44.246726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.246733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.246980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.246988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.247313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.247320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.247634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.247643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.247971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.247980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.248296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.248304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.248623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.248631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.248924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.248932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.249230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.249238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.249478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.249485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 
00:34:24.395 [2024-11-20 06:45:44.249820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.249827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.250136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.250144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.250465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.250474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.250804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.250812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.251150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.251157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.251533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.251540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.251868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.251876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.252196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.252203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.252521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.252528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.252720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.252727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 
00:34:24.395 [2024-11-20 06:45:44.253057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.253064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.253384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.253392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.253716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.253723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.254052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.254060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.254386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.254394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.254597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.254603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.254676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.254683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.395 [2024-11-20 06:45:44.255023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.395 [2024-11-20 06:45:44.255032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.395 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.255349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.255357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.255691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.255700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 
00:34:24.396 [2024-11-20 06:45:44.256040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.256049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.256367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.256374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.256644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.256652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.256830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.256839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.257147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.257155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.257483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.257492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.257811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.257818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.258113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.258121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.258355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.258363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.258697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.258707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 
00:34:24.396 [2024-11-20 06:45:44.259034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.259042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.259364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.259371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.259697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.259705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.260033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.260041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.260334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.260341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.260710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.260718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.261119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.261127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.261438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.261446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.261579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.261586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.261896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.261904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 
00:34:24.396 [2024-11-20 06:45:44.262076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.262084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.262402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.262410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.396 [2024-11-20 06:45:44.262795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.396 [2024-11-20 06:45:44.262803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.396 qpair failed and we were unable to recover it. 00:34:24.670 [2024-11-20 06:45:44.263104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.670 [2024-11-20 06:45:44.263116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.670 qpair failed and we were unable to recover it. 00:34:24.670 [2024-11-20 06:45:44.263432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.670 [2024-11-20 06:45:44.263443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.670 qpair failed and we were unable to recover it. 00:34:24.670 [2024-11-20 06:45:44.263727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.670 [2024-11-20 06:45:44.263735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.670 qpair failed and we were unable to recover it. 00:34:24.670 [2024-11-20 06:45:44.264044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.670 [2024-11-20 06:45:44.264053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.670 qpair failed and we were unable to recover it. 00:34:24.670 [2024-11-20 06:45:44.264380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.670 [2024-11-20 06:45:44.264389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.670 qpair failed and we were unable to recover it. 00:34:24.671 [2024-11-20 06:45:44.264805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.671 [2024-11-20 06:45:44.264814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.671 qpair failed and we were unable to recover it. 00:34:24.671 [2024-11-20 06:45:44.265206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.671 [2024-11-20 06:45:44.265214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.671 qpair failed and we were unable to recover it. 
00:34:24.677 [2024-11-20 06:45:44.325588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.325596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.325928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.325935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.326244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.326252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.326577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.326584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.326920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.326927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.327301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.327308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.327611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.327620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.328030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.328037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.328347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.328355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.328707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.328714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 
00:34:24.677 [2024-11-20 06:45:44.329038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.329047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.329371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.329379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.329573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.329579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.329884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.329893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.330087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.330096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.330426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.677 [2024-11-20 06:45:44.330434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.677 qpair failed and we were unable to recover it. 00:34:24.677 [2024-11-20 06:45:44.330757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.330765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.331089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.331096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.331422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.331430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.331760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.331768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 
00:34:24.678 [2024-11-20 06:45:44.332088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.332095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.332426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.332434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.332740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.332754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.333067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.333074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.333401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.333409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.333720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.333729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.334052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.334059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.334381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.334388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.334711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.334720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.334992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.335001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 
00:34:24.678 [2024-11-20 06:45:44.335324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.335332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.335646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.335655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.335996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.336006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.336324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.336334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.336650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.336663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.337001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.337011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.337338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.337347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.337524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.337534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.337856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.337863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.338200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.338208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 
00:34:24.678 [2024-11-20 06:45:44.338422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.338431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.338770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.338778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.339141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.339150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.339468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.339475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.339802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.339809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.340013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.340023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.340323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.340331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.340649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.340657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.341001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.341009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.678 [2024-11-20 06:45:44.341329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.341337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 
00:34:24.678 [2024-11-20 06:45:44.341669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.678 [2024-11-20 06:45:44.341677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.678 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.342016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.342025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.342223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.342231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.342575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.342583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.342919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.342928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.343269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.343277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.343587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.343597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.343927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.343936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.344236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.344243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.344557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.344564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 
00:34:24.679 [2024-11-20 06:45:44.344883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.344890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.345203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.345211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.345567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.345576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.345910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.345918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.346231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.346239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.346568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.346575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.346885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.346893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.347213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.347220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.347547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.347555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.347881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.347889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 
00:34:24.679 [2024-11-20 06:45:44.348241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.348250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.348542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.348550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.348871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.348879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.349216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.349223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.349588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.349597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.349925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.349933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.350262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.350269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.350489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.350496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.350694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.350701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.351010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.351018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 
00:34:24.679 [2024-11-20 06:45:44.351334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.351341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.351557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.351564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.679 qpair failed and we were unable to recover it. 00:34:24.679 [2024-11-20 06:45:44.351909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.679 [2024-11-20 06:45:44.351916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.352230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.352237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.352446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.352454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.352645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.352653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.352996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.353004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.353326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.353333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.353644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.353652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.353981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.353989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 
00:34:24.680 [2024-11-20 06:45:44.354309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.354317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.354640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.354648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.354835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.354844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.355149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.355156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.355480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.355488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.355677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.355686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.355731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.355739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.356058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.356067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.356387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.356395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.356708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.356716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 
00:34:24.680 [2024-11-20 06:45:44.356924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.356933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.357210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.357219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.357508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.357518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.357826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.357834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.358157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.358165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.358472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.358479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.358790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.358798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.359130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.359137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.359448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.359455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.359776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.359784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 
00:34:24.680 [2024-11-20 06:45:44.360093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.360101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.360425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.360432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.360754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.360762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.361081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.680 [2024-11-20 06:45:44.361088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.680 qpair failed and we were unable to recover it. 00:34:24.680 [2024-11-20 06:45:44.361410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.361418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.361639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.361647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.361961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.361968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.362300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.362308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.362626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.362633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.362970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.362979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 
00:34:24.681 [2024-11-20 06:45:44.363311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.363318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.363696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.363703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.363985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.363993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.364343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.364350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.364628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.364635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.364869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.364878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.365224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.365231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.365540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.365548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.365850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.365859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.366197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.366205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 
00:34:24.681 [2024-11-20 06:45:44.366519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.366527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.366823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.366831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.367156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.367165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.367577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.367585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.367813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.367822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.368139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.368148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.368469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.368477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.368801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.368810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.369144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.369152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 00:34:24.681 [2024-11-20 06:45:44.369485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.369494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it. 
00:34:24.681 [2024-11-20 06:45:44.369681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.681 [2024-11-20 06:45:44.369690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.681 qpair failed and we were unable to recover it.
00:34:24.688 [last three messages repeated with successive timestamps from 06:45:44.369945 through 06:45:44.436129: every connect() to 10.0.0.2, port=4420 failed with errno = 111 and tqpair=0xb2f010 could not be recovered]
00:34:24.688 [2024-11-20 06:45:44.436413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.688 [2024-11-20 06:45:44.436422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.688 qpair failed and we were unable to recover it. 00:34:24.688 [2024-11-20 06:45:44.436762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.688 [2024-11-20 06:45:44.436773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.688 qpair failed and we were unable to recover it. 00:34:24.688 [2024-11-20 06:45:44.437023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.688 [2024-11-20 06:45:44.437031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.688 qpair failed and we were unable to recover it. 00:34:24.688 [2024-11-20 06:45:44.437349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.688 [2024-11-20 06:45:44.437362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.688 qpair failed and we were unable to recover it. 00:34:24.688 [2024-11-20 06:45:44.437532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.688 [2024-11-20 06:45:44.437554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.688 qpair failed and we were unable to recover it. 00:34:24.688 [2024-11-20 06:45:44.437815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.688 [2024-11-20 06:45:44.437839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.688 qpair failed and we were unable to recover it. 00:34:24.688 [2024-11-20 06:45:44.438116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.688 [2024-11-20 06:45:44.438125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.688 qpair failed and we were unable to recover it. 00:34:24.688 [2024-11-20 06:45:44.438425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.688 [2024-11-20 06:45:44.438433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.688 qpair failed and we were unable to recover it. 00:34:24.688 [2024-11-20 06:45:44.438774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.688 [2024-11-20 06:45:44.438785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.688 qpair failed and we were unable to recover it. 00:34:24.688 [2024-11-20 06:45:44.439039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.688 [2024-11-20 06:45:44.439047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 
00:34:24.689 [2024-11-20 06:45:44.439373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.439381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.439710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.439719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.440050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.440059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.440377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.440386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.440707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.440716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.441061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.441069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.441391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.441399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.441732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.441740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.442070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.442083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.442402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.442411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 
00:34:24.689 [2024-11-20 06:45:44.442728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.442737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.443057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.443066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.443366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.443375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.443576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.443585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.443871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.443879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.444217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.444226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.444539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.444548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.444915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.444927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.445202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.445210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.445532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.445540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 
00:34:24.689 [2024-11-20 06:45:44.445779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.445788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.446140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.446148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.446477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.446485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.446899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.446908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.447225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.447233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.447433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.447443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.447666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.447675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.448007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.448015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.448336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.448344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.448669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.448679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 
00:34:24.689 [2024-11-20 06:45:44.448997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.449006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.449353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.689 [2024-11-20 06:45:44.449360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.689 qpair failed and we were unable to recover it. 00:34:24.689 [2024-11-20 06:45:44.449673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.449681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.449929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.449937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.450328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.450337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.450662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.450669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.450986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.450994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.451230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.451238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.451570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.451577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.451857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.451868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 
00:34:24.690 [2024-11-20 06:45:44.452248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.452257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.452565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.452573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.452901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.452909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.453216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.453227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.453543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.453550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.453780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.453788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.454117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.454127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.454315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.454323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.454722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.454730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.455058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.455066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 
00:34:24.690 [2024-11-20 06:45:44.455395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.455403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.455723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.455732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.456070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.456082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.456418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.456427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.456637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.456644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.456895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.456907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.457249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.457258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.457552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.457562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.457893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.457901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.458212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.458222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 
00:34:24.690 [2024-11-20 06:45:44.458542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.458550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.458880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.458887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.459226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.459233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.690 [2024-11-20 06:45:44.459557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.690 [2024-11-20 06:45:44.459565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.690 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.459886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.459894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.460308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.460317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.460636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.460644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.460944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.460955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.461149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.461158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.461464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.461474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 
00:34:24.691 [2024-11-20 06:45:44.461794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.461804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.462154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.462161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.462471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.462481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.462807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.462819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.463244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.463267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.463589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.463597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.463920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.463928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.464145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.464153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.464499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.464507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.464776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.464784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 
00:34:24.691 [2024-11-20 06:45:44.465235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.465242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.465571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.465580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.465862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.465870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.466181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.466189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.466497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.466505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.466831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.466841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.467183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.467201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.467517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.467528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.467909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.467918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.468247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.468255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 
00:34:24.691 [2024-11-20 06:45:44.468563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.468571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.468811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.468819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.469186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.469215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.469582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.469592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.469905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.691 [2024-11-20 06:45:44.469913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.691 qpair failed and we were unable to recover it. 00:34:24.691 [2024-11-20 06:45:44.470261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.470271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.470577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.470588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.470784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.470794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.471168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.471176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.471480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.471489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 
00:34:24.692 [2024-11-20 06:45:44.471815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.471823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.472153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.472161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.472490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.472498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.472819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.472828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.473042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.473050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.473328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.473336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.473668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.473677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.474049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.474059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.474363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.474371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.474698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.474707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 
00:34:24.692 [2024-11-20 06:45:44.475017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.475024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.475347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.475354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.475679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.475687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.475873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.475881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.476206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.476214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.476571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.476580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.476980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.476989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.477213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.477221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.477555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.477562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-11-20 06:45:44.477900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.692 [2024-11-20 06:45:44.477909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.692 qpair failed and we were unable to recover it. 
00:34:24.692 [2024-11-20 06:45:44.478227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.692 [2024-11-20 06:45:44.478235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.692 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats for every connection retry with timestamps 06:45:44.478556 through 06:45:44.544214 ...]
00:34:24.699 [2024-11-20 06:45:44.544529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.699 [2024-11-20 06:45:44.544538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.699 qpair failed and we were unable to recover it.
00:34:24.699 [2024-11-20 06:45:44.544867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.699 [2024-11-20 06:45:44.544876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.699 qpair failed and we were unable to recover it. 00:34:24.699 [2024-11-20 06:45:44.545210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.699 [2024-11-20 06:45:44.545218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.699 qpair failed and we were unable to recover it. 00:34:24.699 [2024-11-20 06:45:44.545396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.699 [2024-11-20 06:45:44.545405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.699 qpair failed and we were unable to recover it. 00:34:24.699 [2024-11-20 06:45:44.545762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.699 [2024-11-20 06:45:44.545770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.699 qpair failed and we were unable to recover it. 00:34:24.699 [2024-11-20 06:45:44.546091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.699 [2024-11-20 06:45:44.546099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.699 qpair failed and we were unable to recover it. 00:34:24.699 [2024-11-20 06:45:44.546419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.699 [2024-11-20 06:45:44.546426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.699 qpair failed and we were unable to recover it. 00:34:24.699 [2024-11-20 06:45:44.546732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.546742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.547062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.547072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.547396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.547405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.547708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.547715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 
00:34:24.700 [2024-11-20 06:45:44.548024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.548032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.548369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.548376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.548696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.548704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.549027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.549037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.549354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.549363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.549687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.549696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.550013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.550020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.550340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.550348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.550685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.550693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.550902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.550912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 
00:34:24.700 [2024-11-20 06:45:44.551213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.551223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.551436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.551445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.551762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.551771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.552097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.552104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.552424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.552431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.552686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.552693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.553028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.553036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.553356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.553363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.553670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.553678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.554005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.554015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 
00:34:24.700 [2024-11-20 06:45:44.554389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.554397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.554714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.554722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.555054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.555061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.555381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.555389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.555577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.555585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.700 [2024-11-20 06:45:44.555884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.700 [2024-11-20 06:45:44.555892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.700 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.556198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.556208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.556533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.556541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.556810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.556820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.557229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.557236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 
00:34:24.701 [2024-11-20 06:45:44.557548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.557556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.557909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.557917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.558229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.558237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.558562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.558571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.558759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.558768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.559106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.559115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.559423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.559430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.559630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.559638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.559958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.559966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.560278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.560286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 
00:34:24.701 [2024-11-20 06:45:44.560684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.560695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.561025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.561034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.561354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.561362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.561682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.561690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.561910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.561918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.562258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.562267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.562597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.562606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.562802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.562812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.563217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.563227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.563550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.563559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 
00:34:24.701 [2024-11-20 06:45:44.563879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.563888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.564216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.564224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.564540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.564547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.564895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.564904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.565212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.565220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.565535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.565545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.565869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.565878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.701 qpair failed and we were unable to recover it. 00:34:24.701 [2024-11-20 06:45:44.566206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.701 [2024-11-20 06:45:44.566214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.566559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.566567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.566890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.566898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 
00:34:24.702 [2024-11-20 06:45:44.567098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.567106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.567438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.567448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.567778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.567788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.568118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.568126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.568361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.568368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.568686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.568694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.569029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.569038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.569358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.569365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.569691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.569700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.570025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.570034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 
00:34:24.702 [2024-11-20 06:45:44.571287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.571326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.571566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.571576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.571883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.571892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.572235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.572243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.572569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.572576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.702 qpair failed and we were unable to recover it. 00:34:24.702 [2024-11-20 06:45:44.572888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.702 [2024-11-20 06:45:44.572896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.573222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.573232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.573432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.573439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.573725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.573735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.574075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.574083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 
00:34:24.978 [2024-11-20 06:45:44.574282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.574290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.574658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.574666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.574998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.575006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.575224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.575233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.575553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.575561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.575902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.575912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.576232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.576243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.576564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.576572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.576905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.576913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 00:34:24.978 [2024-11-20 06:45:44.577236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.978 [2024-11-20 06:45:44.577244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.978 qpair failed and we were unable to recover it. 
00:34:24.979 [2024-11-20 06:45:44.577574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.577581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.577886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.577896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.578209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.578217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.578543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.578551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.578758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.578767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.579070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.579077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.579408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.579419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.579735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.579743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.580083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.580093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.580281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.580290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 
00:34:24.979 [2024-11-20 06:45:44.580653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.580665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.580989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.580998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.581388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.581397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.581721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.581729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.582074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.582084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.582408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.582416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.582723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.582731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.583091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.583102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.583446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.583455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.583637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.583646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 
00:34:24.979 [2024-11-20 06:45:44.583958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.583967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.584335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.584345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.584566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.584575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.584803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.584810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.585040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.585049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.585391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.585399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.585718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.585725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.586129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.586136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.586464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.586473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 00:34:24.979 [2024-11-20 06:45:44.586783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.979 [2024-11-20 06:45:44.586791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.979 qpair failed and we were unable to recover it. 
00:34:24.979 [2024-11-20 06:45:44.587112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.979 [2024-11-20 06:45:44.587121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.979 qpair failed and we were unable to recover it.
00:34:24.979 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously, every few hundred microseconds, through 2024-11-20 06:45:44.655120 ...]
00:34:24.985 [2024-11-20 06:45:44.655464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.655474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.655808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.655816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.656135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.656145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.656466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.656476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.656668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.656678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.657012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.657023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.657333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.657341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.657661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.657670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.657857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.657865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.658214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.658226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 
00:34:24.985 [2024-11-20 06:45:44.658405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.658411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.658714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.658725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.658952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.985 [2024-11-20 06:45:44.658975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.985 qpair failed and we were unable to recover it. 00:34:24.985 [2024-11-20 06:45:44.659307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.659315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.659655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.659663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.659973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.659983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.660291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.660298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.660616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.660627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.660914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.660924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.661264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.661272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 
00:34:24.986 [2024-11-20 06:45:44.661590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.661597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.661923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.661933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.662261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.662271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.662592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.662602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.662932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.662944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.663280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.663290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.663619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.663629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.664041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.664051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.664362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.664374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.664686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.664698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 
00:34:24.986 [2024-11-20 06:45:44.664896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.664906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.665196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.665205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.665562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.665572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.665936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.665943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.666255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.666264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.666591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.666601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.666930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.666940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.667259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.667267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.667589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.667598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.667835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.667844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 
00:34:24.986 [2024-11-20 06:45:44.668243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.668252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.668472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.668481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.668698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.668706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.669051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.669062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.669394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.669403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.669719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.669730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.670045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.670053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.670379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.670388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.670731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.986 [2024-11-20 06:45:44.670741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.986 qpair failed and we were unable to recover it. 00:34:24.986 [2024-11-20 06:45:44.671146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.671156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 
00:34:24.987 [2024-11-20 06:45:44.671346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.671358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.671686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.671693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.672006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.672019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.672340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.672349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.672685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.672693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.672949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.672957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.673287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.673295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.673605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.673613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.673936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.673944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.674278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.674289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 
00:34:24.987 [2024-11-20 06:45:44.674642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.674652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.674866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.674877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.675228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.675237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.675568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.675576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.675910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.675920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.676268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.676279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.676643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.676654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.676967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.676977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.677328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.677337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.677665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.677674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 
00:34:24.987 [2024-11-20 06:45:44.677907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.677916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.678188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.678200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.678542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.678551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.678884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.678894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.679231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.679242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.679422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.679445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.679692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.679711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.680109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.680121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.680349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.680359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.680688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.680701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 
00:34:24.987 [2024-11-20 06:45:44.681023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.681034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.681383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.681402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.681732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.681750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.682128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.682138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.682448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.682463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.987 [2024-11-20 06:45:44.682811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.987 [2024-11-20 06:45:44.682822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.987 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.683153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.683173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.683511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.683523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.683932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.683943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.684273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.684283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 
00:34:24.988 [2024-11-20 06:45:44.684573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.684582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.684896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.684918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.685246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.685257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.685612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.685621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.685966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.685975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.686203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.686216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.686455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.686463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.686670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.686679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.687018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.687031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.687239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.687256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 
00:34:24.988 [2024-11-20 06:45:44.687595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.687605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.687962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.687970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.688292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.688304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.688650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.688658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.688980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.688996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.689381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.689392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.689584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.689592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.689888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.689897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.690238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.690246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.690581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.690590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 
00:34:24.988 [2024-11-20 06:45:44.690920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.690941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.691171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.691179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.691499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.691508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.691822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.691831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.692191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.692198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.988 [2024-11-20 06:45:44.692529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.988 [2024-11-20 06:45:44.692546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.988 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.692872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.692883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.693249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.693257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.693590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.693597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.693794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.693802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 
00:34:24.989 [2024-11-20 06:45:44.694185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.694196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.694517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.694525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.694764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.694783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.695088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.695101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.695444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.695453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.695771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.695780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.696145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.696154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.696477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.696485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.696797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.696807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 00:34:24.989 [2024-11-20 06:45:44.696919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.989 [2024-11-20 06:45:44.696929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.989 qpair failed and we were unable to recover it. 
00:34:24.989 [2024-11-20 06:45:44.697155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.989 [2024-11-20 06:45:44.697162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.989 qpair failed and we were unable to recover it.
00:34:24.989 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats continuously with successive timestamps from 06:45:44.697 through 06:45:44.764 ...]
00:34:24.997 [2024-11-20 06:45:44.764151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.997 [2024-11-20 06:45:44.764158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:24.997 qpair failed and we were unable to recover it.
00:34:24.997 [2024-11-20 06:45:44.764481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.764488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.764818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.764828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.765161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.765170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.765515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.765522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.765870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.765877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.766199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.766207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.766532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.766539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.766861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.766870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.767162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.767170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.767495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.767504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 
00:34:24.997 [2024-11-20 06:45:44.767825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.767834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.768149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.768157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.768481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.768488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.768823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.768832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.769042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.769051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.769378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.769386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.769707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.769717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.769925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.769933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.770084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.770092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.770403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.770412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 
00:34:24.997 [2024-11-20 06:45:44.770719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.770728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.771081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.771088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.771418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.771426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.771751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.771762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.772094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.772102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.772411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.772420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.772733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.772740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.773055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.773063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.773386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.997 [2024-11-20 06:45:44.773393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.997 qpair failed and we were unable to recover it. 00:34:24.997 [2024-11-20 06:45:44.773719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.773729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 
00:34:24.998 [2024-11-20 06:45:44.774044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.774053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.774280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.774288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.774637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.774645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.774975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.774983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.775313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.775320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.775631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.775639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.775968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.775977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.776187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.776196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.776461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.776471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.776797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.776807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 
00:34:24.998 [2024-11-20 06:45:44.777042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.777050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.777352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.777359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.777702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.777709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.778039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.778048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.778376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.778384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.778721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.778731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.779091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.779100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.779290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.779298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.779622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.779630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.780321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.780329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 
00:34:24.998 [2024-11-20 06:45:44.780640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.780660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.781030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.781040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.781370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.781382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.781701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.781709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.782028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.782037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.782345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.782353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.782674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.782683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.783015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.783024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.783343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.783351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.783672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.783680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 
00:34:24.998 [2024-11-20 06:45:44.783874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.783884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.784242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.784249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.784562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.784569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.784892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.784901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.998 [2024-11-20 06:45:44.785224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.998 [2024-11-20 06:45:44.785233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.998 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.785553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.785560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.785931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.785939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.786256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.786264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.786584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.786593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.786921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.786930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 
00:34:24.999 [2024-11-20 06:45:44.787250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.787257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.787578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.787587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.787915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.787924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.788138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.788146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.788406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.788413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.788728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.788736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.789059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.789067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.789389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.789396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.789799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.789810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.790115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.790124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 
00:34:24.999 [2024-11-20 06:45:44.790435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.790443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.790717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.790726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.791036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.791046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.791364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.791373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.791651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.791661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.791966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.791975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.792313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.792322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.792665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.792672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.792984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.792992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.793315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.793323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 
00:34:24.999 [2024-11-20 06:45:44.793651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.793659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.793980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.793987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.794310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.794320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.794666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.794676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.795005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.795014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.795335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.795343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:24.999 [2024-11-20 06:45:44.795672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:24.999 [2024-11-20 06:45:44.795681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:24.999 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.795878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.795887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.796206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.796213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.796536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.796546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 
00:34:25.000 [2024-11-20 06:45:44.796867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.796876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.797103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.797110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.797450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.797458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.797801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.797809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.798161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.798169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.798419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.798427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.798629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.798638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.798920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.798928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.799269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.799277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.799605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.799612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 
00:34:25.000 [2024-11-20 06:45:44.799926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.799933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.800156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.800167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.800473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.800481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.800817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.800827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.801144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.801153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.801476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.801483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.801821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.801829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.802179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.802186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.802584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.802591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.802915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.802923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 
00:34:25.000 [2024-11-20 06:45:44.803247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.803259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.803482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.803489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.803889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.803897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.804216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.804224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.804581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.804590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.804908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.804917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.805155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.805162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.805466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.805476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.805803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.805812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.806136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.806143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 
00:34:25.000 [2024-11-20 06:45:44.806466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.806473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.806851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.806860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.807204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.000 [2024-11-20 06:45:44.807213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.000 qpair failed and we were unable to recover it. 00:34:25.000 [2024-11-20 06:45:44.807402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.001 [2024-11-20 06:45:44.807410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.001 qpair failed and we were unable to recover it. 00:34:25.001 [2024-11-20 06:45:44.807730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.001 [2024-11-20 06:45:44.807739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.001 qpair failed and we were unable to recover it. 00:34:25.001 [2024-11-20 06:45:44.808078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.001 [2024-11-20 06:45:44.808086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.001 qpair failed and we were unable to recover it. 00:34:25.001 [2024-11-20 06:45:44.808390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.001 [2024-11-20 06:45:44.808398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.001 qpair failed and we were unable to recover it. 00:34:25.001 [2024-11-20 06:45:44.808598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.001 [2024-11-20 06:45:44.808605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.001 qpair failed and we were unable to recover it. 00:34:25.001 [2024-11-20 06:45:44.808928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.001 [2024-11-20 06:45:44.808936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.001 qpair failed and we were unable to recover it. 00:34:25.001 [2024-11-20 06:45:44.809258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.001 [2024-11-20 06:45:44.809265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.001 qpair failed and we were unable to recover it. 
00:34:25.006 [2024-11-20 06:45:44.875140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.006 [2024-11-20 06:45:44.875148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.006 qpair failed and we were unable to recover it. 00:34:25.006 [2024-11-20 06:45:44.875459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.006 [2024-11-20 06:45:44.875466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.006 qpair failed and we were unable to recover it. 00:34:25.006 [2024-11-20 06:45:44.875789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.006 [2024-11-20 06:45:44.875799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.006 qpair failed and we were unable to recover it. 00:34:25.006 [2024-11-20 06:45:44.876116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.006 [2024-11-20 06:45:44.876124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.006 qpair failed and we were unable to recover it. 00:34:25.006 [2024-11-20 06:45:44.876324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.006 [2024-11-20 06:45:44.876338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.006 qpair failed and we were unable to recover it. 00:34:25.006 [2024-11-20 06:45:44.876658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.006 [2024-11-20 06:45:44.876666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.006 qpair failed and we were unable to recover it. 00:34:25.006 [2024-11-20 06:45:44.876974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.006 [2024-11-20 06:45:44.876984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.006 qpair failed and we were unable to recover it. 00:34:25.007 [2024-11-20 06:45:44.877315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.007 [2024-11-20 06:45:44.877325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.007 qpair failed and we were unable to recover it. 00:34:25.007 [2024-11-20 06:45:44.877537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.007 [2024-11-20 06:45:44.877546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.007 qpair failed and we were unable to recover it. 00:34:25.007 [2024-11-20 06:45:44.877906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.007 [2024-11-20 06:45:44.877915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.007 qpair failed and we were unable to recover it. 
00:34:25.007 [2024-11-20 06:45:44.878258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.007 [2024-11-20 06:45:44.878265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.007 qpair failed and we were unable to recover it. 00:34:25.007 [2024-11-20 06:45:44.878469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.007 [2024-11-20 06:45:44.878477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.007 qpair failed and we were unable to recover it. 00:34:25.007 [2024-11-20 06:45:44.878817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.007 [2024-11-20 06:45:44.878826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.007 qpair failed and we were unable to recover it. 00:34:25.007 [2024-11-20 06:45:44.879144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.007 [2024-11-20 06:45:44.879152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.007 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.879482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.879494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.879822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.879834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.880922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.880955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.881299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.881310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.881644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.881652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.882595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.882626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 
00:34:25.281 [2024-11-20 06:45:44.882969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.882990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.883361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.883370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.883675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.883682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.883905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.883915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.884242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.884249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.884567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.884579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.884907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.884914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.885221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.885229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.885539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.885547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.885841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.885852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 
00:34:25.281 [2024-11-20 06:45:44.886175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.886185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.886382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.886391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.886742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.886767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.887109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.887117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.887438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.887446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.887640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.887653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.887970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.887981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.888300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.888309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.888638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.888647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.888977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.888986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 
00:34:25.281 [2024-11-20 06:45:44.889303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.889311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.889640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.889649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.889934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.889944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.890159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.890168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.890516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.890526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.890851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.890860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.891267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.281 [2024-11-20 06:45:44.891277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.281 qpair failed and we were unable to recover it. 00:34:25.281 [2024-11-20 06:45:44.891479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.891487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.891658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.891670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.892001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.892010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 
00:34:25.282 [2024-11-20 06:45:44.892193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.892203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.892533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.892542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.892868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.892877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.893214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.893221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.893536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.893545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.893869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.893877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.894210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.894218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.894539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.894549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.894874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.894885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.895224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.895236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 
00:34:25.282 [2024-11-20 06:45:44.895582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.895592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.896628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.896659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.897004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.897016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.897384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.897392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.897689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.897697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.898043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.898051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.898378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.898388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.898694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.898704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.899060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.899070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.899387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.899396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 
00:34:25.282 [2024-11-20 06:45:44.899620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.899629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.899926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.899937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.900277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.900285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.900602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.900610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.900919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.900928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.901260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.901269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.901613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.901624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.901917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.901928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.902158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.902166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.902527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.902536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 
00:34:25.282 [2024-11-20 06:45:44.902866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.902875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.903209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.903217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.903451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.903459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.903797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.903806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.904105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.282 [2024-11-20 06:45:44.904113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.282 qpair failed and we were unable to recover it. 00:34:25.282 [2024-11-20 06:45:44.904347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.904356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.904686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.904696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.904977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.904987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.905228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.905237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.905583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.905591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 
00:34:25.283 [2024-11-20 06:45:44.905890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.905899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.906193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.906202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.906507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.906518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.906801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.906814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.907097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.907104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.907339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.907346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.907604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.907612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.907920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.907929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.908262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.908269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.908465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.908472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 
00:34:25.283 [2024-11-20 06:45:44.908828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.908838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.909988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.910020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.910317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.910328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.910528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.910540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.910849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.910858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.911180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.911189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.911539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.911547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.911846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.911856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.912186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.912195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.912513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.912521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 
00:34:25.283 [2024-11-20 06:45:44.912852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.912861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.913279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.913289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.913619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.913629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.914720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.914773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.915116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.915127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.916137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.916167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.916518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.916528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.916681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.916689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.917047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.917056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.917367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.917376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 
00:34:25.283 [2024-11-20 06:45:44.917558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.917569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.917860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.917870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.918108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.918118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.283 [2024-11-20 06:45:44.918466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.283 [2024-11-20 06:45:44.918478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.283 qpair failed and we were unable to recover it. 00:34:25.284 [2024-11-20 06:45:44.918803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.284 [2024-11-20 06:45:44.918814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.284 qpair failed and we were unable to recover it. 00:34:25.284 [2024-11-20 06:45:44.919146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.284 [2024-11-20 06:45:44.919156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.284 qpair failed and we were unable to recover it. 00:34:25.284 [2024-11-20 06:45:44.919467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.284 [2024-11-20 06:45:44.919488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.284 qpair failed and we were unable to recover it. 00:34:25.284 [2024-11-20 06:45:44.919836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.284 [2024-11-20 06:45:44.919862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.284 qpair failed and we were unable to recover it. 00:34:25.284 [2024-11-20 06:45:44.920112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.284 [2024-11-20 06:45:44.920127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.284 qpair failed and we were unable to recover it. 00:34:25.284 [2024-11-20 06:45:44.920472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.284 [2024-11-20 06:45:44.920483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.284 qpair failed and we were unable to recover it. 
00:34:25.284 [2024-11-20 06:45:44.920662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.284 [2024-11-20 06:45:44.920671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.284 qpair failed and we were unable to recover it.
00:34:25.284 [... the same three-record failure sequence repeats back-to-back from 06:45:44.921024 through 06:45:44.989280 (roughly 200 attempts in about 69 ms): posix_sock_create reports connect() failed, errno = 111; nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420; and each qpair fails and cannot be recovered ...]
00:34:25.289 [2024-11-20 06:45:44.989550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.289 [2024-11-20 06:45:44.989558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.289 qpair failed and we were unable to recover it.
00:34:25.289 [2024-11-20 06:45:44.989886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.289 [2024-11-20 06:45:44.989894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.289 qpair failed and we were unable to recover it. 00:34:25.289 [2024-11-20 06:45:44.990212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.289 [2024-11-20 06:45:44.990220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.289 qpair failed and we were unable to recover it. 00:34:25.289 [2024-11-20 06:45:44.990539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.289 [2024-11-20 06:45:44.990546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.289 qpair failed and we were unable to recover it. 00:34:25.289 [2024-11-20 06:45:44.990878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.289 [2024-11-20 06:45:44.990888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.289 qpair failed and we were unable to recover it. 00:34:25.289 [2024-11-20 06:45:44.991217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.289 [2024-11-20 06:45:44.991227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.289 qpair failed and we were unable to recover it. 00:34:25.289 [2024-11-20 06:45:44.991430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.289 [2024-11-20 06:45:44.991439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.289 qpair failed and we were unable to recover it. 00:34:25.289 [2024-11-20 06:45:44.991749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.289 [2024-11-20 06:45:44.991759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.289 qpair failed and we were unable to recover it. 00:34:25.289 [2024-11-20 06:45:44.991986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.289 [2024-11-20 06:45:44.991995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.289 qpair failed and we were unable to recover it. 00:34:25.289 [2024-11-20 06:45:44.992326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.289 [2024-11-20 06:45:44.992335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.289 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.992641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.992648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 
00:34:25.290 [2024-11-20 06:45:44.992974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.992984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.993311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.993319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.993635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.993652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.993879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.993886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.994106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.994114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.994506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.994514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.994872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.994880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.995124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.995131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.995448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.995455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.995643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.995653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 
00:34:25.290 [2024-11-20 06:45:44.995958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.995967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.996289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.996298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.996629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.996638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.996935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.996944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.997172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.997181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.997523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.997531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.997720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.997731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.998068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.998078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.998301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.998309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.998626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.998635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 
00:34:25.290 [2024-11-20 06:45:44.998933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.998941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.999262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.999270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.999585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.999594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:44.999954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:44.999963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:45.000269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:45.000279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:45.000490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:45.000497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:45.000792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:45.000801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:45.001122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:45.001130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:45.001320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:45.001328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:45.001654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:45.001661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 
00:34:25.290 [2024-11-20 06:45:45.001992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:45.002000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:45.002323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:45.002335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.290 [2024-11-20 06:45:45.002656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.290 [2024-11-20 06:45:45.002665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.290 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.002976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.002984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.003395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.003402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.003689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.003697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.004031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.004039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.004365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.004373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.004699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.004706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.004934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.004944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 
00:34:25.291 [2024-11-20 06:45:45.005133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.005142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.005484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.005492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.005821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.005829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.006201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.006209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.006411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.006419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.006761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.006771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.007089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.007098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.007407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.007416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.007776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.007786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.008099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.008107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 
00:34:25.291 [2024-11-20 06:45:45.008436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.008443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.008760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.008768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.009067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.009076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.009427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.009437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.009765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.009776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.010149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.010157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.010461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.010470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.010795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.010804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.011134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.011142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.011467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.011474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 
00:34:25.291 [2024-11-20 06:45:45.011802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.011812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.012129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.012137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.012328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.012337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.012594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.012602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.012925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.012932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.013253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.013260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.013587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.013594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.013890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.013898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 00:34:25.291 [2024-11-20 06:45:45.014037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.291 [2024-11-20 06:45:45.014045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.291 qpair failed and we were unable to recover it. 
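Note: errno 111 on Linux is ECONNREFUSED. Every attempt in the run above is the host-side initiator opening a TCP connection to the target at 10.0.0.2:4420 (the NVMe/TCP well-known port) while nothing is accepting on that port, so the kernel rejects each connect() immediately and SPDK marks the qpair unrecoverable. A minimal standalone sketch of that failure mode (plain POSIX sockets, not SPDK's socket layer; the address and port are copied from the log):

/* sketch: reproduce "connect() failed, errno = 111" against a port
 * with no listener. Illustrative only, not SPDK code. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* with a reachable host but no listener this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}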
00:34:25.291 [2024-11-20 06:45:45.014257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3cf30 is same with the state(6) to be set
[... 32 outstanding I/O completions (20 reads, 12 writes) then fail in a burst, each logged as "Read completed with error (sct=0, sc=8)" or "Write completed with error (sct=0, sc=8)" followed by "starting I/O failed"; repetitions elided ...]
00:34:25.292 [2024-11-20 06:45:45.015462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:34:25.292 [2024-11-20 06:45:45.016037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.292 [2024-11-20 06:45:45.016159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54d8000b90 with addr=10.0.0.2, port=4420
00:34:25.292 qpair failed and we were unable to recover it.
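Note: sct/sc in the burst above are the NVMe status code type and status code of each failed completion. Read against the NVMe base specification's generic command status table (an interpretation on my part, not something the log states), sct=0, sc=8 is SCT 0h / SC 08h, "Command Aborted due to SQ Deletion": the outstanding reads and writes were aborted when their submission queue was torn down along with the dead qpair. The follow-on "CQ transport error -6" is -ENXIO, matching the "(No such device or address)" text the log itself prints. A small decoding sketch:

/* sketch: decode the (sct, sc) pairs seen above using the NVMe
 * generic command status table (SCT 0h). Illustrative mapping,
 * not an SPDK API. */
#include <stdio.h>

static const char *generic_status(int sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other generic status";
    }
}

int main(void)
{
    /* every failed completion in the burst reported sct=0, sc=8 */
    printf("sct=0, sc=8 -> %s\n", generic_status(8));
    return 0;
}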
00:34:25.292 [2024-11-20 06:45:45.016542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.292 [2024-11-20 06:45:45.016553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.292 qpair failed and we were unable to recover it.
[... the same tqpair=0xb2f010 connect-failure triplet repeats roughly 80 times between 06:45:45.016 and 06:45:45.041; only the first occurrence is shown, the rest are elided ...]
00:34:25.294 [2024-11-20 06:45:45.042118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.042126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.042467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.042477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.042792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.042802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.043136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.043143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.043465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.043473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.043830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.043839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.044055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.044062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.044398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.044405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.044730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.044739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.045058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.045066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 
00:34:25.294 [2024-11-20 06:45:45.045290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.045297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.045591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.045598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.045899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.045907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.046238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.046247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.046587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.046597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.046932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.046940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.047250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.047257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.047590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.047597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.047925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.047933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.048260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.048267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 
00:34:25.294 [2024-11-20 06:45:45.048591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.048598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.294 [2024-11-20 06:45:45.048920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.294 [2024-11-20 06:45:45.048931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.294 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.049245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.049254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.049351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.049359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.049646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.049653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.049998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.050005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.050322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.050329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.050645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.050653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.050995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.051004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.051330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.051340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 
00:34:25.295 [2024-11-20 06:45:45.051665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.051673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.051980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.051990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.052311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.052320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.052641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.052648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.053022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.053032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.053361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.053369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.053580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.053589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.053923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.053933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.054257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.054266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.054573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.054581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 
00:34:25.295 [2024-11-20 06:45:45.054921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.054929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.055247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.055255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.055584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.055591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.055900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.055910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.056229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.056236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.056554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.056562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.056875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.056884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.057225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.057233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.057440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.057450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.057767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.057776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 
00:34:25.295 [2024-11-20 06:45:45.058025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.058032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.058234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.058242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.058444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.058455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.058788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.058796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.059137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.059145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.059440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.059447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.059775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.059784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.060138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.060145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.060317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.060326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 00:34:25.295 [2024-11-20 06:45:45.060683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.295 [2024-11-20 06:45:45.060690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.295 qpair failed and we were unable to recover it. 
00:34:25.295 [2024-11-20 06:45:45.061033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.061042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.061349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.061357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.061577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.061586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.061873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.061882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.062177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.062184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.062518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.062525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.062817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.062825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.063148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.063156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.063488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.063497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.063819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.063828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 
00:34:25.296 [2024-11-20 06:45:45.064165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.064172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.064487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.064495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.064864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.064872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.065162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.065170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.065509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.065520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.065816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.065826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.066011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.066019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.066317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.066324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.066658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.066665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.066986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.066994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 
00:34:25.296 [2024-11-20 06:45:45.067323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.067329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.067644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.067654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.067971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.067981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.068317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.068324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.068649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.068657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.068861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.068869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.069231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.069239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.069550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.069558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.069901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.069908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.070224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.070234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 
00:34:25.296 [2024-11-20 06:45:45.070544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.070554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.070895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.070904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.071274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.071282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.071612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.071620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.071962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.071971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.072289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.072297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.072621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.072631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.072824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.072835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.296 qpair failed and we were unable to recover it. 00:34:25.296 [2024-11-20 06:45:45.073174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.296 [2024-11-20 06:45:45.073182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.073513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.073521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 
00:34:25.297 [2024-11-20 06:45:45.073693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.073701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.074027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.074036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.074357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.074364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.074689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.074698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.074987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.074995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.075328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.075336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.075649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.075657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.075982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.075990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.076313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.076322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.076640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.076649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 
00:34:25.297 [2024-11-20 06:45:45.076968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.076978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.077293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.077301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.077629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.077639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.077972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.077980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.078311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.078320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.078660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.078667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.078984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.078994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.079353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.079361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.079668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.079679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.080022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.080031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 
00:34:25.297 [2024-11-20 06:45:45.080354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.080364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.080704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.080713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.081006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.081013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.081336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.081343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.081670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.081679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.081984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.081993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.082400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.082409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.082644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.082653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.083009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.083017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 00:34:25.297 [2024-11-20 06:45:45.083205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.297 [2024-11-20 06:45:45.083213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.297 qpair failed and we were unable to recover it. 
00:34:25.297 [2024-11-20 06:45:45.083555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.297 [2024-11-20 06:45:45.083564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.297 qpair failed and we were unable to recover it.
[... the same three-line error repeats ~200 more times between 06:45:45.083 and 06:45:45.153: every connect() to 10.0.0.2:4420 fails with errno = 111, nearly always on tqpair=0xb2f010, plus two early attempts on tqpair=0x7f54d8000b90 (06:45:45.084933 and 06:45:45.085490) ...]
00:34:25.303 [2024-11-20 06:45:45.153324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.303 [2024-11-20 06:45:45.153332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.303 qpair failed and we were unable to recover it.
00:34:25.303 [2024-11-20 06:45:45.153636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.153646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.153939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.153949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.154167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.154175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.154448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.154456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.154780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.154790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.155142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.155151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.155474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.155484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.156648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.156683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.157044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.157058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.157417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.157425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 
00:34:25.303 [2024-11-20 06:45:45.157769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.157778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.158119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.158129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.158451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.158460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.158812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.158821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.159141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.159150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.159557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.159569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.159773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.159783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.160131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.160140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.160482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.160492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 00:34:25.303 [2024-11-20 06:45:45.160697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.160706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.303 qpair failed and we were unable to recover it. 
00:34:25.303 [2024-11-20 06:45:45.161010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.303 [2024-11-20 06:45:45.161021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.161345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.161359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.161719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.161731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.162086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.162098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.162295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.162304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.162646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.162657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.162954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.162966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.163309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.163320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.163641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.163650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.163952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.163963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 
00:34:25.304 [2024-11-20 06:45:45.164159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.164170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.164505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.164515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.164830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.164841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.165185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.165194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.165398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.165407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.165619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.165629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.165922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.165932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.166271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.166282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.166639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.166651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.166967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.166977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 
00:34:25.304 [2024-11-20 06:45:45.167320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.167328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.167529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.167537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.167820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.167829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.168212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.168220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.168511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.168520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.168719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.168728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.169056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.169065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.169367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.169376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.169701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.169709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.169995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.170005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 
00:34:25.304 [2024-11-20 06:45:45.170328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.170337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.170663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.170674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.170968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.170979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.171303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.171312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.171498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.171507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.171845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.171854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.172213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.172222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.304 [2024-11-20 06:45:45.172417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.304 [2024-11-20 06:45:45.172425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.304 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.172755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.172765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.173109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.173117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 
00:34:25.305 [2024-11-20 06:45:45.173419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.173429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.173753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.173763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.174160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.174171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.174515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.174522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.174834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.174843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.175173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.175181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.175386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.175396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.175757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.175767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.176135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.176142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.176463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.176471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 
00:34:25.305 [2024-11-20 06:45:45.176804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.176812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.177139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.177146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.177473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.177483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.177819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.177829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.178757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.178790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.179101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.179111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.179344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.179353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.179685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.179693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.179933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.179942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.180198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.180208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 
00:34:25.305 [2024-11-20 06:45:45.180418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.180428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.180756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.180765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.180963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.180974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.181374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.181384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.181687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.181698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.181991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.182001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.182335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.182344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.182668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.182675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.182994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.183004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 00:34:25.305 [2024-11-20 06:45:45.183339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.305 [2024-11-20 06:45:45.183350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.305 qpair failed and we were unable to recover it. 
00:34:25.580 [2024-11-20 06:45:45.183657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.183667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.183884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.183897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.184960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.184993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.185365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.185376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.186310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.186339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.186642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.186654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.186979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.186988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.187251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.187259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.187587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.187595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.187878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.187888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 
00:34:25.580 [2024-11-20 06:45:45.188119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.188134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.188465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.188473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.188798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.188808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.189137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.189145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.189475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.189486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.189803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.189815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.190139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.190146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.190466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.190474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.190797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.190811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.191023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.191031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 
00:34:25.580 [2024-11-20 06:45:45.191322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.191330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.191655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.191663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.191994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.192004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.192234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.192242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.192336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.192342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.192429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.192440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.192758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.192768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.193081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.193092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.193450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.193459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 00:34:25.580 [2024-11-20 06:45:45.193715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.580 [2024-11-20 06:45:45.193723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.580 qpair failed and we were unable to recover it. 
00:34:25.580 [2024-11-20 06:45:45.194038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.581 [2024-11-20 06:45:45.194047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.581 qpair failed and we were unable to recover it. 00:34:25.581 [2024-11-20 06:45:45.194217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.581 [2024-11-20 06:45:45.194226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.581 qpair failed and we were unable to recover it. 00:34:25.581 [2024-11-20 06:45:45.194575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.581 [2024-11-20 06:45:45.194584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.581 qpair failed and we were unable to recover it. 00:34:25.581 [2024-11-20 06:45:45.194909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.581 [2024-11-20 06:45:45.194918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.581 qpair failed and we were unable to recover it. 00:34:25.581 [2024-11-20 06:45:45.195169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.581 [2024-11-20 06:45:45.195176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.581 qpair failed and we were unable to recover it. 00:34:25.581 [2024-11-20 06:45:45.195583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.581 [2024-11-20 06:45:45.195591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.581 qpair failed and we were unable to recover it. 00:34:25.581 [2024-11-20 06:45:45.195908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.581 [2024-11-20 06:45:45.195916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.581 qpair failed and we were unable to recover it. 00:34:25.581 [2024-11-20 06:45:45.196260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.581 [2024-11-20 06:45:45.196268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.581 qpair failed and we were unable to recover it. 00:34:25.581 [2024-11-20 06:45:45.196585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.581 [2024-11-20 06:45:45.196594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.581 qpair failed and we were unable to recover it. 00:34:25.581 [2024-11-20 06:45:45.196910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.581 [2024-11-20 06:45:45.196918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.581 qpair failed and we were unable to recover it. 
00:34:25.581 [2024-11-20 06:45:45.197243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.581 [2024-11-20 06:45:45.197252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.581 qpair failed and we were unable to recover it.
00:34:25.581 [2024-11-20 06:45:45.197461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.581 [2024-11-20 06:45:45.197471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.581 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111 → sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it) repeats for every subsequent reconnect attempt in this window, with only the timestamps advancing, through 06:45:45.261 ...]
00:34:25.586 [2024-11-20 06:45:45.262079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.586 [2024-11-20 06:45:45.262087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.586 qpair failed and we were unable to recover it.
00:34:25.586 [2024-11-20 06:45:45.262407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.586 [2024-11-20 06:45:45.262414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.586 qpair failed and we were unable to recover it.
00:34:25.586 [2024-11-20 06:45:45.262623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.262631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.262954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.262962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.263180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.263187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.263534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.263541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.263878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.263886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.264222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.264229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.264544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.264552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.264878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.264886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.265100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.265108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.265330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.265338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 
00:34:25.586 [2024-11-20 06:45:45.265702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.265709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.266113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.266121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.266493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.266500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.266822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.266830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.267149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.267156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.267560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.267567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.267863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.267871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.268193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.268200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.268540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.268548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.268871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.268879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 
00:34:25.586 [2024-11-20 06:45:45.269197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.269205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.269523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.269530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.269861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.269870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.270196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.270203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.270533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.270541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.270867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.270874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.271191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.271199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.271559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.271566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.271887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.271895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.272136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.272143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 
00:34:25.586 [2024-11-20 06:45:45.272469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.272477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.272802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.272810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.273132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.273140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.273330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.273337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.273674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.273681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.274071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.274080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.274369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.274377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.274686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.274693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.274866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.274874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.275209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.275218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 
00:34:25.586 [2024-11-20 06:45:45.275540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.275548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.275870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.275878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.276210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.276217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.276507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.276514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.276845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.276852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.277114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.277121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.277442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.277450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.277762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.586 [2024-11-20 06:45:45.277770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.586 qpair failed and we were unable to recover it. 00:34:25.586 [2024-11-20 06:45:45.277969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.277976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.278264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.278272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 
00:34:25.587 [2024-11-20 06:45:45.278604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.278611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.278926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.278933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.279256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.279264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.279588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.279596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.279922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.279931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.280256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.280264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.280574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.280581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.280905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.280914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.281232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.281240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.281568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.281576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 
00:34:25.587 [2024-11-20 06:45:45.282011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.282019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.282374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.282381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.282701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.282710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.283034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.283041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.283363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.283370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.283695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.283703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.284045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.284053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.284395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.284402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.284718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.284725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.285064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.285071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 
00:34:25.587 [2024-11-20 06:45:45.285387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.285394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.285731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.285740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.286073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.286081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.286383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.286391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.286712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.286720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.286960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.286967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.287258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.287266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.287471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.287479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.287798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.287806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.288138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.288146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 
00:34:25.587 [2024-11-20 06:45:45.288477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.288484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.288812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.288820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.288933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.288940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.289288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.289295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.289525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.289532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.289858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.289866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.290196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.290203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.290524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.290532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.290851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.290861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.291190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.291199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 
00:34:25.587 [2024-11-20 06:45:45.291545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.291554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.291883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.291891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.292198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.292205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.292528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.292535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.292859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.292868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.293184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.293191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.293593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.293600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.293886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.293894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.294262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.294269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.294511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.294519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 
00:34:25.587 [2024-11-20 06:45:45.294737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.294748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.295085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.295092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.295399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.295406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.295764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.295775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.296129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.296136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.296443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.296450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.587 [2024-11-20 06:45:45.296775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.587 [2024-11-20 06:45:45.296783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.587 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.297115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.297124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.297440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.297447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.297862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.297871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 
00:34:25.588 [2024-11-20 06:45:45.298186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.298193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.298372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.298380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.298742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.298754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.299078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.299085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.299405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.299412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.299739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.299752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.300079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.300086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.300405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.300414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.300729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.300736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.301046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.301054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 
00:34:25.588 [2024-11-20 06:45:45.301371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.301378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.301704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.301712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.301918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.301926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.302213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.302220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.302544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.302551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.302877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.302884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.303089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.303098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.303318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.303325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.303692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.303700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 00:34:25.588 [2024-11-20 06:45:45.304024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.588 [2024-11-20 06:45:45.304032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.588 qpair failed and we were unable to recover it. 
00:34:25.588 [2024-11-20 06:45:45.304323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.588 [2024-11-20 06:45:45.304332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.588 qpair failed and we were unable to recover it.
00:34:25.591 [... the same three-line error repeats for every subsequent reconnect attempt, from 2024-11-20 06:45:45.304656 through 06:45:45.370249: connect() to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0xb2f010, and each qpair fails without recovering ...]
00:34:25.591 [2024-11-20 06:45:45.370573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.370581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.370902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.370910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.371235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.371243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.371450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.371458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.371801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.371810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.372032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.372039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.372317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.372326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.372490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.372500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.372851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.372861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.373179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.373189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 
00:34:25.591 [2024-11-20 06:45:45.373506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.373515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.373731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.373741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.374013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.374021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.374336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.374343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.374668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.374677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.375019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.375028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.375350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.375359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.375572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.591 [2024-11-20 06:45:45.375583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.591 qpair failed and we were unable to recover it. 00:34:25.591 [2024-11-20 06:45:45.375789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.375797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.376047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.376055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 
00:34:25.592 [2024-11-20 06:45:45.376292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.376301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.376624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.376634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.376962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.376971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.377306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.377314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.377508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.377518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.377786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.377796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.378045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.378053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.378379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.378388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.378609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.378616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.378935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.378944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 
00:34:25.592 [2024-11-20 06:45:45.379231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.379239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.379552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.379563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.379965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.379974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.380272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.380280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.380602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.380612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.380922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.380932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.381253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.381261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.381629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.381637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.381962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.381970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.382272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.382280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 
00:34:25.592 [2024-11-20 06:45:45.382611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.382620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.382963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.382971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.383258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.383266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.383588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.383595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.383842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.383851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.384184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.384192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.384512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.384521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.384808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.384816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.385147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.385154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.385331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.385341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 
00:34:25.592 [2024-11-20 06:45:45.385667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.385675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.385983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.385991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.386308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.386315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.386638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.386646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.386949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.386957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.387243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.387251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.387574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.387583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.387805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.387814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.388239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.388246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.388597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.388605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 
00:34:25.592 [2024-11-20 06:45:45.388920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.388930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.389264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.389271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.389600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.389610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.389953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.389961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.390282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.390291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.390608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.390618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.390959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.390968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.391288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.391298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.391624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.391633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.391935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.391944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 
00:34:25.592 [2024-11-20 06:45:45.392268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.392278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.392599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.392608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.392920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.392930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.393259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.393269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.393638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.393649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.393938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.393948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.394268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.394281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.394608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.394619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.394920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.394930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 00:34:25.592 [2024-11-20 06:45:45.395118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.395127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.592 qpair failed and we were unable to recover it. 
00:34:25.592 [2024-11-20 06:45:45.395473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.592 [2024-11-20 06:45:45.395483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.395807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.395816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.396159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.396167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.396484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.396491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.396779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.396787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.397151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.397159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.397444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.397451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.397784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.397793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.397971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.397980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.398298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.398308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 
00:34:25.593 [2024-11-20 06:45:45.398640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.398649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.398978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.398988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.399316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.399323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.399649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.399659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.399998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.400006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.400314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.400322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.400594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.400604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.400923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.400933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.401258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.401267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.401458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.401468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 
00:34:25.593 [2024-11-20 06:45:45.401833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.401844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.402161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.402170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.402476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.402485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.402799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.402810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.403130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.403138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.403339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.403348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.403535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.403543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.403839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.403848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.404190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.404198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.404517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.404525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 
00:34:25.593 [2024-11-20 06:45:45.404857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.404866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.405187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.405197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.405526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.405535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.405861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.405869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.406211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.406219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.406548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.406556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.406777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.406786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.407153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.407162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.407480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.407488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.407788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.407798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 
00:34:25.593 [2024-11-20 06:45:45.408184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.408193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.408349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.408357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.408682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.408689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.409103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.409112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.409456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.409466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.409782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.409792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.410137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.410146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.410432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.410440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.410764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.410773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 00:34:25.593 [2024-11-20 06:45:45.411170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.593 [2024-11-20 06:45:45.411179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.593 qpair failed and we were unable to recover it. 
00:34:25.593 [2024-11-20 06:45:45.411508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.593 [2024-11-20 06:45:45.411515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.593 qpair failed and we were unable to recover it.
00:34:25.593-00:34:25.597 [... the same three-line error repeats with advancing timestamps (06:45:45.411 through 06:45:45.477, roughly 200 further occurrences), always for tqpair=0xb2f010 at addr=10.0.0.2, port=4420, errno = 111 ...]
00:34:25.597 [2024-11-20 06:45:45.477645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.477654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.478043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.478051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.478225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.478234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.478551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.478560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.478883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.478891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.479208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.479223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.479547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.479554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.479862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.479870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.480196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.480206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.480503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.480511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 
00:34:25.597 [2024-11-20 06:45:45.480818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.480826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.481022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.481029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.481409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.481416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.481744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.481756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.597 [2024-11-20 06:45:45.482074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.597 [2024-11-20 06:45:45.482081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.597 qpair failed and we were unable to recover it. 00:34:25.877 [2024-11-20 06:45:45.482411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.877 [2024-11-20 06:45:45.482421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.877 qpair failed and we were unable to recover it. 00:34:25.877 [2024-11-20 06:45:45.482754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.877 [2024-11-20 06:45:45.482767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.877 qpair failed and we were unable to recover it. 00:34:25.877 [2024-11-20 06:45:45.482905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.877 [2024-11-20 06:45:45.482913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.877 qpair failed and we were unable to recover it. 00:34:25.877 [2024-11-20 06:45:45.483120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.877 [2024-11-20 06:45:45.483129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.877 qpair failed and we were unable to recover it. 00:34:25.877 [2024-11-20 06:45:45.483459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.483467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 
00:34:25.878 [2024-11-20 06:45:45.483784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.483792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.484120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.484128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.484456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.484463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.484773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.484781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.485088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.485095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.485416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.485424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.485761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.485769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.486109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.486116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.486437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.486444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.486767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.486775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 
00:34:25.878 [2024-11-20 06:45:45.486974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.486983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.487385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.487392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.487693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.487701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.487976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.487983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.488311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.488318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.488613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.488620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.488829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.488837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.489161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.489170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.489489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.489496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.489819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.489828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 
00:34:25.878 [2024-11-20 06:45:45.490154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.490161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.490470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.490478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.490804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.490811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.491129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.491136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.491468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.491475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.491801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.491809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.492140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.492147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.492514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.492522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.492825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.492832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.493172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.493181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 
00:34:25.878 [2024-11-20 06:45:45.493375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.493382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.493717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.493724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.494041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.494049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.494372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.494379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.494704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.494711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.495037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.495044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.495365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.878 [2024-11-20 06:45:45.495373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.878 qpair failed and we were unable to recover it. 00:34:25.878 [2024-11-20 06:45:45.495699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.495706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.496004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.496012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.496344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.496352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 
00:34:25.879 [2024-11-20 06:45:45.496724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.496731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.496902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.496910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.497244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.497252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.497629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.497638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.497927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.497934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.498239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.498247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.498454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.498462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.498778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.498787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.499105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.499112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.499433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.499441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 
00:34:25.879 [2024-11-20 06:45:45.499805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.499812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.500123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.500130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.500353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.500360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.500701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.500708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.501018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.501025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.501334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.501342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.501628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.501636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.501927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.501935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.502264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.502271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.502597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.502605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 
00:34:25.879 [2024-11-20 06:45:45.502938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.502945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.503273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.503281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.503605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.503612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.503908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.503915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.504243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.504252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.504572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.504580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.504907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.504916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.505265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.505274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.505591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.505598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.505966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.505975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 
00:34:25.879 [2024-11-20 06:45:45.506306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.506315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.506621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.506628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.506960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.506968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.507281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.507298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.507520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.879 [2024-11-20 06:45:45.507527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.879 qpair failed and we were unable to recover it. 00:34:25.879 [2024-11-20 06:45:45.507842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.507851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.508171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.508178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.508472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.508479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.508810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.508819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.509002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.509009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 
00:34:25.880 [2024-11-20 06:45:45.509320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.509328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.509613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.509620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.509958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.509975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.510332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.510339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.510665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.510672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.510995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.511003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.511323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.511331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.511654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.511661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.511987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.511995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.512185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.512192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 
00:34:25.880 [2024-11-20 06:45:45.512522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.512529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.512824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.512832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.513154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.513162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.513486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.513493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.513777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.513785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.513990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.513997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.514332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.514339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.514647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.514656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.514881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.514889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.515161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.515169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 
00:34:25.880 [2024-11-20 06:45:45.515503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.515512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.515861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.515870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.516187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.516198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.516513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.516521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.516843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.516850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.517031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.517041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.517394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.517402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.517737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.517752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.518065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.518072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 00:34:25.880 [2024-11-20 06:45:45.518396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.880 [2024-11-20 06:45:45.518404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.880 qpair failed and we were unable to recover it. 
00:34:25.880 [2024-11-20 06:45:45.518712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.880 [2024-11-20 06:45:45.518719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.880 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats continuously: posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0xb2f010 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."; timestamps run from 2024-11-20 06:45:45.518 through 06:45:45.584 ...]
00:34:25.887 [2024-11-20 06:45:45.584551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.887 [2024-11-20 06:45:45.584558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.887 qpair failed and we were unable to recover it.
00:34:25.887 [2024-11-20 06:45:45.584878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.887 [2024-11-20 06:45:45.584885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.887 qpair failed and we were unable to recover it. 00:34:25.887 [2024-11-20 06:45:45.585207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.887 [2024-11-20 06:45:45.585215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.887 qpair failed and we were unable to recover it. 00:34:25.887 [2024-11-20 06:45:45.585535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.887 [2024-11-20 06:45:45.585542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.887 qpair failed and we were unable to recover it. 00:34:25.887 [2024-11-20 06:45:45.585851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.887 [2024-11-20 06:45:45.585859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.586077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.586085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.586369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.586378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.586696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.586703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.586907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.586915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.587123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.587131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.587482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.587490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 
00:34:25.888 [2024-11-20 06:45:45.587816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.587824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.588160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.588167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.588474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.588481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.588690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.588697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.589019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.589027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.589349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.589356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.589678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.589686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.590047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.590054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.590351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.590358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.590682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.590690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 
00:34:25.888 [2024-11-20 06:45:45.591028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.591037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.591367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.591375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.591703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.591714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.592034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.592041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.592361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.592369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.592705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.592713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.593019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.593028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.593397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.593405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.593591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.593600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.593931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.593939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 
00:34:25.888 [2024-11-20 06:45:45.594238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.594245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.594565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.594572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.594876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.594883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.595254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.595261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.595525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.595533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.595761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.595769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.888 [2024-11-20 06:45:45.596113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.888 [2024-11-20 06:45:45.596121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.888 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.596324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.596331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.596661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.596669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.596875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.596884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 
00:34:25.889 [2024-11-20 06:45:45.597223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.597230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.597538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.597545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.597737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.597749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.597986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.597993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.598279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.598287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.598620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.598627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.598887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.598895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.599173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.599180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.599510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.599518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.599736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.599751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 
00:34:25.889 [2024-11-20 06:45:45.600103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.600110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.600417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.600425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.600753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.600760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.601086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.601094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.601417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.601424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.601759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.601767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.602119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.602127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.602413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.602420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.602753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.602764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 00:34:25.889 [2024-11-20 06:45:45.603078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.889 [2024-11-20 06:45:45.603087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.889 qpair failed and we were unable to recover it. 
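Every record in the run above is the same event: errno 111 is ECONNREFUSED, meaning the initiator's connect() to 10.0.0.2:4420 is actively rejected because nothing is listening on that port anymore. A minimal out-of-band probe (not part of the test) that distinguishes this from a hang or a routing problem, reusing the address and port from the log:

    # connection refused (errno 111) fails immediately; a filtered or
    # unreachable target would instead run into the 1-second timeout
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo "listener is up on 10.0.0.2:4420"
    else
        echo "no listener: connect() fails with ECONNREFUSED (errno 111)"
    fi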
00:34:25.889 [2024-11-20 06:45:45.603298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.889 [2024-11-20 06:45:45.603306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.889 qpair failed and we were unable to recover it.
00:34:25.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2905872 Killed "${NVMF_APP[@]}" "$@"
00:34:25.889 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:25.889 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:25.889 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:25.889 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
[... four more reconnect-failure records (06:45:45.603 to 06:45:45.605) were interleaved with the trace lines above; collapsed here ...]
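The xtrace above is the pivotal moment of the tc2 case: the test has deliberately SIGKILLed the nvmf_tgt instance that owned 10.0.0.2:4420 (pid 2905872 in the "Killed" message), and disconnect_init 10.0.0.2 / nvmfappstart -m 0xF0 rebuilds it. Until the new listener is up, every reconnect attempt by the initiator can only end in ECONNREFUSED. A hedged sketch of the two halves, built only from commands that appear in this trace (the exact launch command is the nvmf/common.sh@508 line further down):

    # teardown: SIGKILL the target that owned 10.0.0.2:4420 (pid from the log)
    sudo kill -9 2905872
    # bring-up: restart nvmf_tgt inside the test's network namespace, as
    # traced at nvmf/common.sh@508 below
    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &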
00:34:25.889 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect() failed (errno = 111) / sock connection error / qpair failed record repeats for every reconnect attempt from 06:45:45.605 through 06:45:45.614; duplicates omitted ...]
00:34:25.890 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2906905
00:34:25.890 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2906905
00:34:25.890 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2906905 ']'
00:34:25.890 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:25.890 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:25.890 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:34:25.890 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:25.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:25.890 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:34:25.890 06:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... reconnect-failure records from 06:45:45.614 through 06:45:45.618, interleaved with the trace lines above, are collapsed here ...]
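waitforlisten 2906905 is the autotest helper that blocks until the freshly started target (the new nvmfpid) is alive and serving RPCs on /var/tmp/spdk.sock; rpc_addr and max_retries=100 in the trace are its locals. A hedged approximation of that loop: the real helper talks to SPDK's RPC server, while this sketch only waits for the process to stay alive and the UNIX socket to appear.

    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        # the target must still be running (pid taken from the trace above)
        if ! kill -0 2906905 2>/dev/null; then
            echo "target exited before listening" >&2
            break
        fi
        # done once the RPC socket exists
        [ -S "$rpc_addr" ] && break
        sleep 0.5
    done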
00:34:25.891 [2024-11-20 06:45:45.619294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.891 [2024-11-20 06:45:45.619302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.891 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats for every reconnect attempt from 06:45:45.619 through 06:45:45.635; duplicates omitted ...]
00:34:25.892 [2024-11-20 06:45:45.635347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.635356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.635650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.635659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.635967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.635978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.636298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.636306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.636630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.636640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.636927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.636937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.637238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.637246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.637577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.637586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.637898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.637908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.638262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.638269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 
00:34:25.892 [2024-11-20 06:45:45.638568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.638575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.638789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.638798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.639128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.639137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.639461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.639470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.639819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.639827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.640173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.640181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.640502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.640511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.640821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.640830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.641044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.641051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 00:34:25.892 [2024-11-20 06:45:45.641344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.892 [2024-11-20 06:45:45.641353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.892 qpair failed and we were unable to recover it. 
00:34:25.892 [2024-11-20 06:45:45.641671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.641680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.642033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.642041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.642358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.642368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.642695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.642705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.642997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.643006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.643194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.643202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.643436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.643444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.643802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.643812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.644126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.644135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.644458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.644466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 
00:34:25.893 [2024-11-20 06:45:45.644786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.644796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.645173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.645182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.645500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.645508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.645823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.645834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.646162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.646170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.646485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.646493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.646825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.646834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.647037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.647045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.647369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.647378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.647692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.647700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 
00:34:25.893 [2024-11-20 06:45:45.647995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.648003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.648294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.648302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.648494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.648503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.648818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.648827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.649160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.649170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.649357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.649366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.649696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.649704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.650028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.650036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.650358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.650367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.650757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.650767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 
00:34:25.893 [2024-11-20 06:45:45.651049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.651058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.651393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.651402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.651728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.651737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.652076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.652087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.893 [2024-11-20 06:45:45.652405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.893 [2024-11-20 06:45:45.652415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.893 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.652735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.652752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.653143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.653153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.653479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.653487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.653713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.653720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.654114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.654123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 
00:34:25.894 [2024-11-20 06:45:45.654441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.654450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.654727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.654734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.655042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.655050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.655371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.655379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.655687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.655694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.655912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.655920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.656235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.656243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.656567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.656576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.656905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.656915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.657140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.657151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 
00:34:25.894 [2024-11-20 06:45:45.657368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.657380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.657707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.657716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.658041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.658052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.658371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.658381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.658700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.658710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.659040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.659049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.659386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.659393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.659701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.659708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.660062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.660071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.660255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.660265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 
00:34:25.894 [2024-11-20 06:45:45.660641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.660648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.660952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.660961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.661288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.661295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.661624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.661632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.661921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.661932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.662256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.662265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.662450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.662458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.662678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.662690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.663018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.663027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.663349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.663359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 
00:34:25.894 [2024-11-20 06:45:45.663683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.663692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.664022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.664032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.664350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.894 [2024-11-20 06:45:45.664359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.894 qpair failed and we were unable to recover it. 00:34:25.894 [2024-11-20 06:45:45.664681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.664693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.665029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.665038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.665247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.665256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.665579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.665588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.665996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.666007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.666310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.666319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.666637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.666644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 
00:34:25.895 [2024-11-20 06:45:45.666965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.666974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.667299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.667307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.667631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.667639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.667978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.667986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.668292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.668301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.668669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.668678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.668962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.668970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.669300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.669307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.669653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.669662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 00:34:25.895 [2024-11-20 06:45:45.670012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.895 [2024-11-20 06:45:45.670021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.895 qpair failed and we were unable to recover it. 
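For readers triaging this burst: errno = 111 is ECONNREFUSED on Linux, meaning each TCP SYN to 10.0.0.2:4420 (the NVMe/TCP well-known port) was actively refused because no listener was up yet on the target side. A minimal standalone C sketch of the same failing connect() path (illustrative only, not SPDK source; the address and port are taken from the log):

    /* Minimal sketch of the failing connect() seen above. With no listener
     * on 10.0.0.2:4420 this prints errno 111 (ECONNREFUSED) on Linux. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* Mirrors the posix.c log line: connect() failed, errno = 111 */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        }
        close(fd);
        return 0;
    }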
[... identical qpair connect failures from 06:45:45.670 through 06:45:45.672 elided ...]
00:34:25.895 [2024-11-20 06:45:45.672456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.895 [2024-11-20 06:45:45.672464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.895 qpair failed and we were unable to recover it.
00:34:25.895 [2024-11-20 06:45:45.672460] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:34:25.895 [2024-11-20 06:45:45.672534] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:25.895 [2024-11-20 06:45:45.672801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.895 [2024-11-20 06:45:45.672814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.895 qpair failed and we were unable to recover it.
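The two interleaved "Starting SPDK ... / DPDK 24.03.0 initialization" and "DPDK EAL parameters" lines are the nvmf target application coming up in parallel with the host's reconnect attempts; SPDK hands that parameter vector to DPDK's rte_eal_init(). A hedged sketch of that call with an abridged version of the logged arguments (SPDK builds this argv internally, so the snippet is illustrative, not the actual SPDK code path):

    /* Illustrative only: feeding an EAL argument vector like the one logged
     * above into rte_eal_init(). Log-level flags omitted for brevity. */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                           /* program name, as in the log */
            "-c", "0xF0",                     /* core mask: cores 4-7 */
            "--no-telemetry",
            "--base-virtaddr=0x200000000000",
            "--match-allocations",
            "--file-prefix=spdk0",
            "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        /* rte_eal_init() returns the number of parsed args, or -1 on error. */
        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        rte_eal_cleanup();
        return 0;
    }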
00:34:25.895 [2024-11-20 06:45:45.673018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.895 [2024-11-20 06:45:45.673029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.895 qpair failed and we were unable to recover it.
[... the same failure repeats for every subsequent reconnect attempt through 06:45:45.688; duplicate entries elided ...]
00:34:25.896 [2024-11-20 06:45:45.688763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.896 [2024-11-20 06:45:45.688773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.896 qpair failed and we were unable to recover it.
00:34:25.897 [2024-11-20 06:45:45.689112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.689120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.689440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.689449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.689772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.689780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.690119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.690136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.690355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.690363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.690594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.690603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.690813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.690821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.691154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.691162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.691495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.691503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.691819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.691827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 
00:34:25.897 [2024-11-20 06:45:45.692045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.692053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.692415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.692423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.692752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.692760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.693057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.693065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.693383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.693391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.693631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.693638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.693867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.693875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.694271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.694282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.694603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.694612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.694960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.694969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 
00:34:25.897 [2024-11-20 06:45:45.695173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.695180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.695453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.695461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.695786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.695795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.696220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.696230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.696556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.696563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.696880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.696889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.697210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.697218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.697431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.697439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.697780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.697788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.698095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.698104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 
00:34:25.897 [2024-11-20 06:45:45.698422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.698430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.698631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.698639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.699000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.699010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.699365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.699373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.897 qpair failed and we were unable to recover it. 00:34:25.897 [2024-11-20 06:45:45.699665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.897 [2024-11-20 06:45:45.699673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.700047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.700055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.700363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.700374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.700697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.700704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.701004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.701012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.701333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.701341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 
00:34:25.898 [2024-11-20 06:45:45.701651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.701659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.701981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.701989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.702306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.702315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.702643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.702650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.702863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.702873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.703140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.703149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.703468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.703476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.703784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.703792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.703996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.704005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.704309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.704316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 
00:34:25.898 [2024-11-20 06:45:45.704634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.704643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.705041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.705051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.705248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.705257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.705578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.705586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.705781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.705791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.706110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.706119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.706430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.706438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.706752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.706762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.707081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.707089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.707393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.707401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 
00:34:25.898 [2024-11-20 06:45:45.707687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.707697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.708029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.708039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.708423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.708433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.708752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.708764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.709085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.709092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.709418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.709427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.709758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.709767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.710089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.710097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.710284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.710292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.710589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.710597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 
00:34:25.898 [2024-11-20 06:45:45.710911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.710918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.711241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.711250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.711564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.898 [2024-11-20 06:45:45.711572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.898 qpair failed and we were unable to recover it. 00:34:25.898 [2024-11-20 06:45:45.711897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.711906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.712223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.712231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.712558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.712566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.712870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.712878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.713215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.713223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.713555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.713564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.713764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.713773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 
00:34:25.899 [2024-11-20 06:45:45.714168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.714177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.714467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.714475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.714827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.714838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.715054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.715062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.715297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.715305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.715684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.715693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.716018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.716027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.716336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.716351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.716699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.716708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.717018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.717026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 
00:34:25.899 [2024-11-20 06:45:45.717292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.717300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.717626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.717635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.717845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.717857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.718199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.718207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.718556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.718565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.718881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.718888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.719094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.719102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.719424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.719432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.719623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.719631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.719962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.719970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 
00:34:25.899 [2024-11-20 06:45:45.720313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.720320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.720637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.720645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.720842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.720851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.721193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.721201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.721517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.721528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.721865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.721872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.722208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.722216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.722531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.722539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.722845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.722853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.723189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.723196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 
00:34:25.899 [2024-11-20 06:45:45.723509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.723517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.899 qpair failed and we were unable to recover it. 00:34:25.899 [2024-11-20 06:45:45.723855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.899 [2024-11-20 06:45:45.723863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.724188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.724196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.724589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.724596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.724885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.724892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.725226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.725234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.725541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.725550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.725892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.725900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.726093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.726101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.726421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.726431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 
00:34:25.900 [2024-11-20 06:45:45.726754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.726762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.726987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.726997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.727332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.727340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.727662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.727669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.727999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.728007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.728326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.728333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.728659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.728666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.728888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.728898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.729238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.729246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 00:34:25.900 [2024-11-20 06:45:45.729563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.900 [2024-11-20 06:45:45.729572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:25.900 qpair failed and we were unable to recover it. 
00:34:25.900 [2024-11-20 06:45:45.729765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.900 [2024-11-20 06:45:45.729775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.900 qpair failed and we were unable to recover it.
[the same three-line sequence repeats for every reconnect attempt from 06:45:45.730082 through 06:45:45.768027: connect() to 10.0.0.2:4420 is refused with errno = 111, tqpair=0xb2f010 fails, and the qpair cannot be recovered]
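A note on the run above: errno 111 on Linux is ECONNREFUSED, meaning nothing was listening on 10.0.0.2:4420 when the initiator tried to connect; the NVMe/TCP driver keeps retrying and logs one connect()/qpair-failure pair per attempt. A minimal standalone C sketch (plain libc, not SPDK code) that confirms the errno mapping:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* On Linux, errno 111 is ECONNREFUSED ("Connection refused"). */
    printf("errno 111 = %s\n", strerror(111));
    printf("ECONNREFUSED == 111? %s\n", ECONNREFUSED == 111 ? "yes" : "no");
    return 0;
}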
00:34:25.903 Read completed with error (sct=0, sc=8)
00:34:25.903 starting I/O failed
[the same "completed with error (sct=0, sc=8) / starting I/O failed" pair repeats for all 32 outstanding commands on the qpair: 23 reads and 9 writes]
00:34:25.904 [2024-11-20 06:45:45.768792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:25.904 [2024-11-20 06:45:45.769251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.904 [2024-11-20 06:45:45.769311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54d0000b90 with addr=10.0.0.2, port=4420
00:34:25.904 qpair failed and we were unable to recover it.
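The block above is the qpair teardown: every outstanding command completes with sct=0, sc=0x8, which per the NVMe base specification is Generic Command Status / Command Aborted due to SQ Deletion, and the -6 in the CQ transport error is the negative Linux errno ENXIO ("No such device or address"), matching the string SPDK prints. A small standalone C sketch (an illustration of the two mappings, not SPDK's own decoder):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned sct = 0x0, sc = 0x08; /* values from the log lines above */

    /* NVMe base spec: SCT 0x0 = Generic Command Status; within it,
     * SC 0x08 = Command Aborted due to SQ Deletion. */
    if (sct == 0x0 && sc == 0x08)
        puts("NVMe status: command aborted due to SQ deletion");

    /* SPDK reports the transport error as a negative errno: -6 = -ENXIO. */
    printf("transport error -6 = %s\n", strerror(ENXIO));
    return 0;
}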
00:34:25.904 [2024-11-20 06:45:45.769683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:25.904 [2024-11-20 06:45:45.769695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:25.904 qpair failed and we were unable to recover it.
[reconnect attempts against tqpair=0xb2f010 at 10.0.0.2:4420 continue in the same three-line pattern from 06:45:45.769995 through 06:45:45.773911, every one refused with errno = 111]
00:34:26.239 [2024-11-20 06:45:45.774062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[the errno = 111 connect()/qpair-failure pattern for tqpair=0xb2f010 then resumes from 06:45:45.774123 through 06:45:45.793928, with no attempt recovering]
00:34:26.241 [2024-11-20 06:45:45.794242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.794250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.794585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.794592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.794906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.794915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.795247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.795255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.795442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.795450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.795817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.795827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.796145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.796153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.796475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.796482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.796839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.796847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.797184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.797192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 
00:34:26.241 [2024-11-20 06:45:45.797517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.797524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.797836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.797847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.798198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.798206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.241 [2024-11-20 06:45:45.798510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.241 [2024-11-20 06:45:45.798518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.241 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.798848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.798857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.799171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.799180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.799504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.799512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.799837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.799848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.800172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.800180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.800380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.800388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 
00:34:26.242 [2024-11-20 06:45:45.800674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.800682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.801053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.801063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.801365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.801373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.801698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.801707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.802000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.802009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.802330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.802339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.802715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.802724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.802942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.802950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.803276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.803284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.803613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.803620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 
00:34:26.242 [2024-11-20 06:45:45.803957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.803966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.804222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.804230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.804443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.804450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.804874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.804882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.805202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.805210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.805366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.805374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.805701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.805708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.806031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.806040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.806348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.806357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.806685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.806694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 
00:34:26.242 [2024-11-20 06:45:45.807018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.807028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.807354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.807363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.807684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.807693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.808014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.808022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.808345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.808352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.808759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.808769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.808938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.808947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.809277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.809286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.809610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.809618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.809917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.809926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 
00:34:26.242 [2024-11-20 06:45:45.810245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.810252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.810455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.242 [2024-11-20 06:45:45.810463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.242 qpair failed and we were unable to recover it. 00:34:26.242 [2024-11-20 06:45:45.810840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.810848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.811157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.811165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.811491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.811499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.811828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.811837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.812168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.812175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.812504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.812512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.812778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.812790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.813001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.813009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 
00:34:26.243 [2024-11-20 06:45:45.813373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.813383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.813697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.813706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.814039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.814048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.814373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.814380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.814557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.814565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.814920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.814928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.815249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.815257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.815588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.815595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.815800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.815808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.816208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.816216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 
00:34:26.243 [2024-11-20 06:45:45.816540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.816548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.816872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.816879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.817202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.817210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.817408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.817428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.817763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.817773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.818099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.818107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.818431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.818439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.818767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.818775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.819004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.819012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.819347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.819354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 
00:34:26.243 [2024-11-20 06:45:45.819678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.819686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.820014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.820022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.820345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.820353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.820673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.820683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.821007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.821015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.821337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.821348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.821663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.821671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.821991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.822000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.822205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.822215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.243 qpair failed and we were unable to recover it. 00:34:26.243 [2024-11-20 06:45:45.822539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.243 [2024-11-20 06:45:45.822547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.244 qpair failed and we were unable to recover it. 
00:34:26.244 [2024-11-20 06:45:45.822863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.244 [2024-11-20 06:45:45.822872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.244 qpair failed and we were unable to recover it. 00:34:26.244 [2024-11-20 06:45:45.823215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.244 [2024-11-20 06:45:45.823223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.244 qpair failed and we were unable to recover it. 00:34:26.244 [2024-11-20 06:45:45.823552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.244 [2024-11-20 06:45:45.823561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.244 qpair failed and we were unable to recover it. 00:34:26.244 [2024-11-20 06:45:45.823892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.244 [2024-11-20 06:45:45.823901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.244 qpair failed and we were unable to recover it. 00:34:26.244 [2024-11-20 06:45:45.824235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.244 [2024-11-20 06:45:45.824245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.244 qpair failed and we were unable to recover it. 00:34:26.244 [2024-11-20 06:45:45.824560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.244 [2024-11-20 06:45:45.824568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.244 qpair failed and we were unable to recover it. 00:34:26.244 [2024-11-20 06:45:45.824766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.244 [2024-11-20 06:45:45.824775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.244 qpair failed and we were unable to recover it. 00:34:26.244 [2024-11-20 06:45:45.825112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.244 [2024-11-20 06:45:45.825119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.244 qpair failed and we were unable to recover it. 00:34:26.244 [2024-11-20 06:45:45.825440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.244 [2024-11-20 06:45:45.825448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.244 qpair failed and we were unable to recover it. 00:34:26.244 [2024-11-20 06:45:45.825801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.244 [2024-11-20 06:45:45.825810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.244 qpair failed and we were unable to recover it. 
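Note: errno = 111 is ECONNREFUSED on Linux, i.e. nothing was accepting on 10.0.0.2:4420 when the initiator dialed it (the target side is still coming up; see the reactor notices further down). A minimal stand-alone probe of the same condition, assuming netcat is available on the test host:
  nc -z -w 1 10.0.0.2 4420 || echo "connect refused/timed out (cf. errno 111 / ECONNREFUSED)"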
[... repeated qpair connection failures (errno = 111, tqpair=0xb2f010, addr=10.0.0.2, port=4420) continue from 06:45:45.826132 through 06:45:45.827404 ...]
00:34:26.244 [2024-11-20 06:45:45.827534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:26.244 [2024-11-20 06:45:45.827580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:26.244 [2024-11-20 06:45:45.827588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:26.244 [2024-11-20 06:45:45.827596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:26.244 [2024-11-20 06:45:45.827602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[... repeated qpair connection failures (errno = 111) continue from 06:45:45.827799 through 06:45:45.828406 ...]
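Note: the app_setup_trace notices above give the capture procedure verbatim; with the nvmf target still running, the trace can be snapshotted at runtime or the shared-memory file saved for later:
  spdk_trace -s nvmf -i 0          # snapshot of events at runtime (command from the notice above)
  cp /dev/shm/nvmf_trace.0 /tmp/   # keep the trace file for offline analysis/debug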
[... repeated qpair connection failures (errno = 111, tqpair=0xb2f010, addr=10.0.0.2, port=4420) continue from 06:45:45.828623 through 06:45:45.829763 ...]
00:34:26.244 [2024-11-20 06:45:45.829638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:34:26.244 [2024-11-20 06:45:45.829815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:34:26.244 [2024-11-20 06:45:45.829952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:34:26.244 [2024-11-20 06:45:45.829952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
[... repeated qpair connection failures (errno = 111) continue from 06:45:45.830083 through 06:45:45.831468 ...]
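Note: the four reactor_run notices mark the SPDK target's per-core event loops coming up on cores 4-7, i.e. a CPU mask of 0xF0. A hypothetical invocation that would produce this layout (SPDK targets take the mask via -m/--cpumask; binary name and mask here are illustrative, not taken from this run's command line):
  nvmf_tgt -m 0xF0   # pin the four reactors to cores 4-7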
[... the same three-line qpair connection failure (posix_sock_create errno = 111 -> nvme_tcp_qpair_connect_sock on tqpair=0xb2f010, addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") keeps repeating, timestamps only changing, from 06:45:45.831714 through 06:45:45.846662 ...]
00:34:26.246 [2024-11-20 06:45:45.846982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.846990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.847325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.847333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.847525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.847534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.847755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.847762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.847986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.847995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.848336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.848345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.848572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.848580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.848895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.848904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.849111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.849119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.849315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.849323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 
00:34:26.246 [2024-11-20 06:45:45.849662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.849671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.850004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.850012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.850323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.850331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.850587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.850595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.850929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.850937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.851262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.851270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.851601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.851609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.851940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.851958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.852323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.852331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.852633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.852640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 
00:34:26.246 [2024-11-20 06:45:45.853007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.853015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.853321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.853329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.853658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.853666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.853870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.853879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.854230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.854239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.854429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.854437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.854672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.854681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.854860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.854869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.246 qpair failed and we were unable to recover it. 00:34:26.246 [2024-11-20 06:45:45.855179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.246 [2024-11-20 06:45:45.855188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.855556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.855565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 
00:34:26.247 [2024-11-20 06:45:45.855888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.855895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.856202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.856210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.856533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.856541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.856772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.856780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.857117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.857125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.857455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.857463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.857784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.857792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.858130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.858139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.858464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.858473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.858810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.858819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 
00:34:26.247 [2024-11-20 06:45:45.859182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.859189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.859516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.859524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.859724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.859733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.860102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.860112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.860439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.860447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.860777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.860785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.861051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.861059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.861375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.861382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.861552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.861560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.861842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.861850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 
00:34:26.247 [2024-11-20 06:45:45.862077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.862085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.862262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.862275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.862554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.862562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.862888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.862897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.863234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.863243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.863573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.863581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.863918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.863926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.864250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.864258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.864456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.864467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.864808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.864816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 
00:34:26.247 [2024-11-20 06:45:45.865141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.865150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.865335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.865342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.865723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.865731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.866080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.866090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.866422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.866430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.866783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.247 [2024-11-20 06:45:45.866792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.247 qpair failed and we were unable to recover it. 00:34:26.247 [2024-11-20 06:45:45.867140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.867150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.867478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.867487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.867671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.867683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.867930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.867939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 
00:34:26.248 [2024-11-20 06:45:45.868286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.868297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.868488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.868496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.868697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.868708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.868867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.868875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.869149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.869159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.869335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.869343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.869669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.869679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.870014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.870024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.870239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.870248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.870567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.870576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 
00:34:26.248 [2024-11-20 06:45:45.870917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.870926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.871262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.871271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.871583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.871594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.871926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.871934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.872161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.872169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.872515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.872522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.872893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.872903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.873272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.873281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.873611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.873621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.873942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.873951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 
00:34:26.248 [2024-11-20 06:45:45.874283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.874291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.874618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.874627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.874974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.874983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.875166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.875177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.875567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.875578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.875919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.875927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.876160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.876168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.876353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.876361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.876558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.876567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.876830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.876840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 
00:34:26.248 [2024-11-20 06:45:45.877191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.877200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.877536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.877544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.877763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.877772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.878085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.878095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.248 qpair failed and we were unable to recover it. 00:34:26.248 [2024-11-20 06:45:45.878174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.248 [2024-11-20 06:45:45.878182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.878275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.878283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.878635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.878645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.878953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.878964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.879140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.879150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.879543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.879555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 
00:34:26.249 [2024-11-20 06:45:45.879721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.879729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.880061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.880071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.880264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.880273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.880656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.880664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.881012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.881022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.881384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.881395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.881597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.881605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.881830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.881838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.882154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.882162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.882363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.882377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 
00:34:26.249 [2024-11-20 06:45:45.882680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.882690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.883025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.883034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.883364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.883372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.883671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.883680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.883903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.883913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.884096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.884104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.884439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.884448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.884627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.884638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.884933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.884942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.885285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.885294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 
00:34:26.249 [2024-11-20 06:45:45.885615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.885623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.885827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.885836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.886017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.886027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.886196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.886206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.886557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.886568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.886814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.886825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.887149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.887158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.249 [2024-11-20 06:45:45.887501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.249 [2024-11-20 06:45:45.887509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.249 qpair failed and we were unable to recover it. 00:34:26.250 [2024-11-20 06:45:45.887722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.250 [2024-11-20 06:45:45.887730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.250 qpair failed and we were unable to recover it. 00:34:26.250 [2024-11-20 06:45:45.887985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.250 [2024-11-20 06:45:45.887995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.250 qpair failed and we were unable to recover it. 
00:34:26.255 [2024-11-20 06:45:45.946831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.946839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.947021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.947029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.947363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.947371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.947694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.947703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.948069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.948078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.948271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.948279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.948327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.948333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.948649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.948657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.948865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.948874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.949205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.949213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 
00:34:26.255 [2024-11-20 06:45:45.949394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.949403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.949713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.949721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.950004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.950014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.950204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.950219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.950529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.950537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.950849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.950857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.951181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.951189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.951575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.951584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.951939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.951947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.952169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.952177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 
00:34:26.255 [2024-11-20 06:45:45.952516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.952526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.952862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.952870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.953221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.255 [2024-11-20 06:45:45.953231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.255 qpair failed and we were unable to recover it. 00:34:26.255 [2024-11-20 06:45:45.953400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.953408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.953733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.953743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.954087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.954094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.954423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.954430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.954639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.954647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.954818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.954826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.955019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.955026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 
00:34:26.256 [2024-11-20 06:45:45.955231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.955238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.955413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.955421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.955744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.955755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.956085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.956092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.956416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.956423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.956748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.956756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.957083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.957090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.957296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.957304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.957665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.957672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.957879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.957887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 
00:34:26.256 [2024-11-20 06:45:45.958103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.958111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.958424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.958431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.958799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.958813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.959002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.959010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.959349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.959357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.959716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.959725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.959919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.959928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.960270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.960277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.960588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.960595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.960918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.960926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 
00:34:26.256 [2024-11-20 06:45:45.961152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.961159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.961492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.961499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.961825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.961833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.962139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.962146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.962502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.962509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.962814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.962822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.963000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.963007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.963320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.963328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.963546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.963553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.963861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.963869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 
00:34:26.256 [2024-11-20 06:45:45.964207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.964215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.256 [2024-11-20 06:45:45.964392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.256 [2024-11-20 06:45:45.964399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.256 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.964743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.964754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.964930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.964938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.965159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.965167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.965381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.965390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.965713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.965720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.966021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.966028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.966369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.966376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.966705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.966712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 
00:34:26.257 [2024-11-20 06:45:45.967038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.967045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.967370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.967378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.967544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.967551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.967840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.967847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.968201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.968208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.968534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.968543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.968827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.968835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.969116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.969123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.969461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.969468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.969804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.969811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 
00:34:26.257 [2024-11-20 06:45:45.970140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.970147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.970478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.970485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.970684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.970691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.970968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.970975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.971020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.971027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.971203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.971210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.971549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.971557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.971879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.971887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.972214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.972221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.972419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.972427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 
00:34:26.257 [2024-11-20 06:45:45.972639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.972648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.972819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.972827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.973047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.973057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.973402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.973411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.973736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.973744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.973940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.973948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.974156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.974164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.974507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.974516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.974758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.974767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 00:34:26.257 [2024-11-20 06:45:45.975071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.257 [2024-11-20 06:45:45.975079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.257 qpair failed and we were unable to recover it. 
00:34:26.258 [2024-11-20 06:45:45.975439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.975446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.975776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.975784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.976140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.976147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.976457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.976472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.976828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.976835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.977128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.977135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.977468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.977475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.977694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.977702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.977882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.977890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.978200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.978207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 
00:34:26.258 [2024-11-20 06:45:45.978518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.978525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.978857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.978865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.979066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.979074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.979155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.979161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.979441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.979449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.979609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.979617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.979921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.979928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.980287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.980296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.980650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.980657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.980987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.980995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 
00:34:26.258 [2024-11-20 06:45:45.981173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.981180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.981352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.981358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.981697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.981705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.981880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.981889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.982171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.982178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.982579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.982586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.982759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.982768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.983072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.983079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.983303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.983310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 00:34:26.258 [2024-11-20 06:45:45.983479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.258 [2024-11-20 06:45:45.983486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.258 qpair failed and we were unable to recover it. 
00:34:26.258 [2024-11-20 06:45:45.983535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.258 [2024-11-20 06:45:45.983542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.258 qpair failed and we were unable to recover it.
00:34:26.264 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats, timestamps aside, for roughly 200 further reconnect attempts between 06:45:45.983 and 06:45:46.045; every attempt was refused.]
00:34:26.264 [2024-11-20 06:45:46.045803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.045811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.046096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.046103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.046430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.046437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.046759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.046767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.047155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.047162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.047353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.047368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.047534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.047541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.047876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.047884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.048220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.048227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.048561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.048569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 
00:34:26.264 [2024-11-20 06:45:46.048779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.048787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.049007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.049014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.049206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.049214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.049610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.049617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.049919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.049926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.050100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.050107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.050151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.264 [2024-11-20 06:45:46.050158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.264 qpair failed and we were unable to recover it. 00:34:26.264 [2024-11-20 06:45:46.050518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.050525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.050576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.050584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.050797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.050805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 
00:34:26.265 [2024-11-20 06:45:46.051015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.051022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.051359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.051366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.051571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.051579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.051795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.051802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.052135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.052146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.052308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.052316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.052479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.052486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.052776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.052785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.053167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.053174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.053335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.053343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 
00:34:26.265 [2024-11-20 06:45:46.053676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.053682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.054000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.054007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.054178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.054187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.054477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.054484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.054813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.054820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.055160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.055167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.055377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.055384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.055629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.055638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.055991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.055999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.056194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.056201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 
00:34:26.265 [2024-11-20 06:45:46.056573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.056581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.056908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.056915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.057247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.057254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.057444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.057452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.057739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.057751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.058069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.058075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.058401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.058409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.058754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.058762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.059069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.059077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.059407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.059415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 
00:34:26.265 [2024-11-20 06:45:46.059740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.059754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.060075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.060082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.060454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.060461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.060662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.060669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.060713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.060721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.265 qpair failed and we were unable to recover it. 00:34:26.265 [2024-11-20 06:45:46.061048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.265 [2024-11-20 06:45:46.061056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.061382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.061390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.061558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.061564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.061870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.061877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.061925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.061932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 
00:34:26.266 [2024-11-20 06:45:46.062265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.062272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.062581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.062588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.062759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.062767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.063129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.063136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.063470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.063477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.063774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.063784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.064121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.064130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.064459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.064468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.064814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.064822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.065143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.065150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 
00:34:26.266 [2024-11-20 06:45:46.065491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.065498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.065690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.065697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.066096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.066105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.066297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.066304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.066475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.066483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.066859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.066866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.067219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.067227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.067430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.067437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.067774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.067781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.068110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.068117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 
00:34:26.266 [2024-11-20 06:45:46.068290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.068297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.068664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.068671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.068985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.068992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.069296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.069303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.069636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.069643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.069986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.069993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.070211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.070218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.070504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.070513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.070852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.070859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.071213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.071220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 
00:34:26.266 [2024-11-20 06:45:46.071423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.071430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.071618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.071626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.071816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.071825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.266 [2024-11-20 06:45:46.072189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.266 [2024-11-20 06:45:46.072197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.266 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.072507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.072514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.072674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.072681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.072991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.072998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.073328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.073335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.073514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.073530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.073728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.073735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 
00:34:26.267 [2024-11-20 06:45:46.073931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.073938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.074313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.074321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.074512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.074519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.074737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.074748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.074938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.074945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.075256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.075263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.075607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.075614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.075929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.075937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.076112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.076120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.076399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.076406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 
00:34:26.267 [2024-11-20 06:45:46.076594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.076602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.076789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.076797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.077099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.077107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.077403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.077410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.077741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.077752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.078095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.078102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.078436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.078444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.078757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.078766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.078949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.078956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.079245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.079252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 
00:34:26.267 [2024-11-20 06:45:46.079599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.079606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.079794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.079801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.080114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.080122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.080336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.080344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.080510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.080518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.080695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.080703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.081059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.267 [2024-11-20 06:45:46.081068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.267 qpair failed and we were unable to recover it. 00:34:26.267 [2024-11-20 06:45:46.081406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.268 [2024-11-20 06:45:46.081414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.268 qpair failed and we were unable to recover it. 00:34:26.268 [2024-11-20 06:45:46.081628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.268 [2024-11-20 06:45:46.081636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.268 qpair failed and we were unable to recover it. 00:34:26.268 [2024-11-20 06:45:46.081997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.268 [2024-11-20 06:45:46.082004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.268 qpair failed and we were unable to recover it. 
00:34:26.268 [2024-11-20 06:45:46.082317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.268 [2024-11-20 06:45:46.082324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.268 qpair failed and we were unable to recover it.
[... the connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." triplet above repeats 71 more times for tqpair=0xb2f010 (addr=10.0.0.2, port=4420), timestamps 06:45:46.082530 through 06:45:46.103071 ...]
00:34:26.269 Read completed with error (sct=0, sc=8)
00:34:26.269 starting I/O failed
[... the remaining outstanding completions fail the same way, 32 failed I/Os in total (21 reads, 11 writes), each with (sct=0, sc=8) followed by "starting I/O failed" ...]
00:34:26.270 [2024-11-20 06:45:46.103882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:34:26.270 [2024-11-20 06:45:46.104412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.270 [2024-11-20 06:45:46.104473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54dc000b90 with addr=10.0.0.2, port=4420
00:34:26.270 qpair failed and we were unable to recover it.
[... the same triplet repeats 5 more times for tqpair=0x7f54dc000b90, timestamps 06:45:46.104718 through 06:45:46.106048 ...]
[... the same triplet then repeats 124 times for tqpair=0xb2f010 (addr=10.0.0.2, port=4420), timestamps 06:45:46.106394 through 06:45:46.142203 ...]
00:34:26.551 [2024-11-20 06:45:46.142394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-11-20 06:45:46.142402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-11-20 06:45:46.142724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-11-20 06:45:46.142732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-11-20 06:45:46.142943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-11-20 06:45:46.142952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-11-20 06:45:46.143314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-11-20 06:45:46.143322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-11-20 06:45:46.143519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-11-20 06:45:46.143528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-11-20 06:45:46.143713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-11-20 06:45:46.143722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-11-20 06:45:46.143919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.551 [2024-11-20 06:45:46.143929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.551 qpair failed and we were unable to recover it. 00:34:26.551 [2024-11-20 06:45:46.144219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.144227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.144554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.144563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.144938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.144945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 
00:34:26.552 [2024-11-20 06:45:46.145144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.145151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.145524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.145531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.145863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.145871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.146058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.146065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.146424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.146431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.146604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.146611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.146975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.146983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.147324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.147331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.147680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.147687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.147890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.147898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 
00:34:26.552 [2024-11-20 06:45:46.148110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.148117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.148417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.148424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.148595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.148601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.148798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.148806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.149104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.149112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.149497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.149505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.149805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.149813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.150137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.150144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.150463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.150471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.150792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.150799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 
00:34:26.552 [2024-11-20 06:45:46.150987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.150994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.151288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.151295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.151707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.151714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.151891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.151899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.152291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.152298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.152633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.152640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.152971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.152979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.153161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.153169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.153461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.153468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.153805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.153813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 
00:34:26.552 [2024-11-20 06:45:46.154142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.154150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.154477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.154484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.154800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.154807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.155150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.155157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.155463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.155469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.552 [2024-11-20 06:45:46.155649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.552 [2024-11-20 06:45:46.155657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.552 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.156093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.156100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.156407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.156415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.156587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.156595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.156770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.156778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 
00:34:26.553 [2024-11-20 06:45:46.157106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.157113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.157448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.157455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.157650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.157657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.158042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.158050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.158221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.158229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.158521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.158528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.158878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.158885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.159226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.159232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.159436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.159443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.159824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.159831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 
00:34:26.553 [2024-11-20 06:45:46.160190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.160197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.160512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.160520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.160691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.160698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.160878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.160886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.161283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.161290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.161468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.161488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.161823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.161833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.162005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.162011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.162221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.162228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.162545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.162553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 
00:34:26.553 [2024-11-20 06:45:46.162735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.162743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.162942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.162949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.163237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.163245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.163576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.163585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.163755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.163763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.163962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.163969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.164262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.164269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.164623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.164632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.164963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.164971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.165177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.165185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 
00:34:26.553 [2024-11-20 06:45:46.165471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.165478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.165802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.165810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.166141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.166148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.166342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.553 [2024-11-20 06:45:46.166350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.553 qpair failed and we were unable to recover it. 00:34:26.553 [2024-11-20 06:45:46.166714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.166721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.167049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.167057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.167377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.167384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.167729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.167736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.167917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.167926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.168168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.168176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 
00:34:26.554 [2024-11-20 06:45:46.168368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.168377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.168667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.168675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.169084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.169093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.169408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.169416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.169756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.169763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.170082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.170089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.170257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.170264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.170495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.170502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.170779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.170788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.171053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.171060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 
00:34:26.554 [2024-11-20 06:45:46.171370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.171377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.171568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.171576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.171800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.171807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.172122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.172129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.172330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.172338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.172546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.172553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.172731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.172739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.172978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.172991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.173172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.173179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.173419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.173427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 
00:34:26.554 [2024-11-20 06:45:46.173653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.173661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.173995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.174004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.174208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.174216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.174508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.174516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.174845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.174853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.175173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.175181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.175531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.175538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.175727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.175735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.175987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.175994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.176324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.176331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 
00:34:26.554 [2024-11-20 06:45:46.176649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.176656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.177025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.554 [2024-11-20 06:45:46.177032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.554 qpair failed and we were unable to recover it. 00:34:26.554 [2024-11-20 06:45:46.177329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.555 [2024-11-20 06:45:46.177338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.555 qpair failed and we were unable to recover it. 00:34:26.555 [2024-11-20 06:45:46.177648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.555 [2024-11-20 06:45:46.177657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.555 qpair failed and we were unable to recover it. 00:34:26.555 [2024-11-20 06:45:46.177964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.555 [2024-11-20 06:45:46.177972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.555 qpair failed and we were unable to recover it. 00:34:26.555 [2024-11-20 06:45:46.178303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.555 [2024-11-20 06:45:46.178310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.555 qpair failed and we were unable to recover it. 00:34:26.555 [2024-11-20 06:45:46.178608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.555 [2024-11-20 06:45:46.178615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.555 qpair failed and we were unable to recover it. 00:34:26.555 [2024-11-20 06:45:46.178928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.555 [2024-11-20 06:45:46.178935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.555 qpair failed and we were unable to recover it. 00:34:26.555 [2024-11-20 06:45:46.179156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.555 [2024-11-20 06:45:46.179164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.555 qpair failed and we were unable to recover it. 00:34:26.555 [2024-11-20 06:45:46.179362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.555 [2024-11-20 06:45:46.179369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.555 qpair failed and we were unable to recover it. 
00:34:26.555 [2024-11-20 06:45:46.179600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.555 [2024-11-20 06:45:46.179608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.555 qpair failed and we were unable to recover it.
00:34:26.555 [... the same three-line error repeats for every reconnect attempt, with only the microsecond timestamp advancing, from 06:45:46.179 through 06:45:46.239; tqpair, address, port, and errno are identical throughout ...]
00:34:26.559 [2024-11-20 06:45:46.239906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.559 [2024-11-20 06:45:46.239914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.559 qpair failed and we were unable to recover it.
00:34:26.560 [2024-11-20 06:45:46.240259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.560 [2024-11-20 06:45:46.240268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.560 qpair failed and we were unable to recover it. 00:34:26.560 [2024-11-20 06:45:46.240454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.560 [2024-11-20 06:45:46.240464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.560 qpair failed and we were unable to recover it. 00:34:26.560 [2024-11-20 06:45:46.240763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.560 [2024-11-20 06:45:46.240771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.241076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.241084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.241394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.241402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.241715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.241723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.242051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.242062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.242436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.242447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.242638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.242648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.242870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.242880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 
00:34:26.561 [2024-11-20 06:45:46.243101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.243110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.243338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.243348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.243580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.243589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.243785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.243794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.244062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.244069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.244467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.244476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.244656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.244664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.244992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.245001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.245316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.245324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.245675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.245684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 
00:34:26.561 [2024-11-20 06:45:46.246039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.246047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.246468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.246476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.246783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.246792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.247131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.247140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.247306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.247313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.247515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.247524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.247717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.247725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.247911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.247920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.248222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.248231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.248562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.248569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 
00:34:26.561 [2024-11-20 06:45:46.248893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.248901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.249100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.249108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.249350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.249358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.249538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.249546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.249753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.249762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.249942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.249950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.250275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.250283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.250613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.250621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.250806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.250815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.561 [2024-11-20 06:45:46.251103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.251114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 
00:34:26.561 [2024-11-20 06:45:46.251442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.561 [2024-11-20 06:45:46.251450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.561 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.251776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.251784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.252095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.252103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.252396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.252403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.252738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.252757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.253077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.253086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.253282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.253289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.253658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.253665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.253916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.253926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.254284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.254292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 
00:34:26.562 [2024-11-20 06:45:46.254378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.254384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.254723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.254732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.255063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.255073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.255401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.255410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.255752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.255762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.256078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.256087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.256398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.256405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.256722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.256730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.256969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.256977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.257313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.257321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 
00:34:26.562 [2024-11-20 06:45:46.257649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.257658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.257890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.257899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.258235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.258245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.258635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.258643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.259053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.259061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.259401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.259408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.259739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.259755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.260111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.260121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.260302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.260311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.260669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.260678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 
00:34:26.562 [2024-11-20 06:45:46.260980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.260989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.261290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.261297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.261579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.261588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.261872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.261882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.262211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.262220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.262546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.262557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.262877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.262885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.263058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.263066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.263356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.562 [2024-11-20 06:45:46.263365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.562 qpair failed and we were unable to recover it. 00:34:26.562 [2024-11-20 06:45:46.263595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.263602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 
00:34:26.563 [2024-11-20 06:45:46.263791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.263801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.264002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.264011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.264383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.264393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.264467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.264475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.264629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.264638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.264812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.264819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.265032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.265039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.265223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.265231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.265574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.265584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.265767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.265777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 
00:34:26.563 [2024-11-20 06:45:46.265818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.265825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.265912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.265920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.266230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.266239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.266555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.266563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.266910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.266919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.267219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.267228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.267559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.267569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.267924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.267933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.268274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.268281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.268655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.268664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 
00:34:26.563 [2024-11-20 06:45:46.268858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.268868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.269230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.269239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.269576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.269584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.269925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.269934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.270235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.270242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.270577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.270584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.270756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.270763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.271093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.271104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.271328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.271337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.271652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.271661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 
00:34:26.563 [2024-11-20 06:45:46.271988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.271996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.272169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.272184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.272523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.272532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.272702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.272711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.273048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.273059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.273388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.273397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.273578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.273585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.563 qpair failed and we were unable to recover it. 00:34:26.563 [2024-11-20 06:45:46.273974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.563 [2024-11-20 06:45:46.273982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.274307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.274314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.274689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.274696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 
00:34:26.564 [2024-11-20 06:45:46.274740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.274751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.274936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.274944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.275282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.275291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.275625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.275632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.275936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.275944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.276289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.276298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.276608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.276615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.276924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.276932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.277273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.277282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.277515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.277523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 
00:34:26.564 [2024-11-20 06:45:46.277723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.277730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.277940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.277949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.278156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.278165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.278361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.278368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.278568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.278576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.278909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.278918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.279139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.279154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.279481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.279488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.279806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.279816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 00:34:26.564 [2024-11-20 06:45:46.280137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.280151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it. 
00:34:26.564 [2024-11-20 06:45:46.282002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.564 [2024-11-20 06:45:46.282109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54d8000b90 with addr=10.0.0.2, port=4420 00:34:26.564 qpair failed and we were unable to recover it.
[... the same failure then repeats for tqpair=0x7f54d8000b90 from 06:45:46.282 through 06:45:46.284, after which the retries switch back to tqpair=0xb2f010 and keep failing identically from 06:45:46.284 through 06:45:46.289; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:26.565 [2024-11-20 06:45:46.290246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.290254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.290445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.290454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.290781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.290789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.290868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.290876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.291219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.291227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.291566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.291578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.291911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.291920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.292097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.292106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.292516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.292525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.292843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.292853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 
00:34:26.565 [2024-11-20 06:45:46.293186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.293194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.293391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.293400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.293603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.293611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.293961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.293970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.294165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.294175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.294396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.294404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.294736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.294751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.295118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.295127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.295467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.295476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 00:34:26.565 [2024-11-20 06:45:46.295675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.565 [2024-11-20 06:45:46.295684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.565 qpair failed and we were unable to recover it. 
00:34:26.566 [2024-11-20 06:45:46.296000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.296010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.296370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.296379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.296726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.296735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.297061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.297070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.297398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.297407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.297744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.297760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.298072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.298080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.298404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.298412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.298582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.298591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.298926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.298935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 
00:34:26.566 [2024-11-20 06:45:46.299319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.299327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.299665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.299673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.299996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.300008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.300196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.300205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.300403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.300412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.300618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.300626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.300821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.300829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.301128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.301138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.301369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.301381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.301422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.301429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 
00:34:26.566 [2024-11-20 06:45:46.301806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.301816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.301992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.302000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.302218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.302226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.302507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.302515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.302691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.302698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.302932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.302940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.303254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.303264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.303609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.303619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.303930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.303938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.304262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.304271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 
00:34:26.566 [2024-11-20 06:45:46.304606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.304614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.304955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.304964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.305286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.305294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.305621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.305629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.305966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.305974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.306300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.306308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.306499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.306507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.306856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.566 [2024-11-20 06:45:46.306864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.566 qpair failed and we were unable to recover it. 00:34:26.566 [2024-11-20 06:45:46.307213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.307220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.307337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.307343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 
00:34:26.567 [2024-11-20 06:45:46.307676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.307684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.307880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.307889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.308341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.308348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.308665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.308672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.308876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.308884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.309090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.309097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.309427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.309434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.309754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.309763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.310144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.310154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.310499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.310507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 
00:34:26.567 [2024-11-20 06:45:46.310836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.310843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.311017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.311024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.311349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.311356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.311532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.311542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.311909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.311917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.312109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.312116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.312387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.312394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.312600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.312607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.312967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.312975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.313026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.313032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 
00:34:26.567 [2024-11-20 06:45:46.313351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.313358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.313532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.313585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.313929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.313937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.314105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.314113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.314341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.314348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.314534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.314542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.314819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.314826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.315025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.315039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.315589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.315693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54d8000b90 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.316186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.316226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54d8000b90 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 
00:34:26.567 [2024-11-20 06:45:46.316585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.316615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f54d8000b90 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.317014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.317025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.317321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.317329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.317672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.317679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.317986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.317993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.318341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.567 [2024-11-20 06:45:46.318348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.567 qpair failed and we were unable to recover it. 00:34:26.567 [2024-11-20 06:45:46.318522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.318530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.318828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.318837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.319148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.319155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.319375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.319382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 
00:34:26.568 [2024-11-20 06:45:46.319680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.319690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.319909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.319916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.320270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.320277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.320481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.320490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.320821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.320828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.321143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.321150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.321478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.321485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.321685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.321693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.321982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.321990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.322341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.322348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 
00:34:26.568 [2024-11-20 06:45:46.322709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.322719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.322938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.322947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.323353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.323361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.323587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.323594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.323911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.323922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.324101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.324108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.324293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.324301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.324588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.324596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.324635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.324643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.324995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.325004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 
00:34:26.568 [2024-11-20 06:45:46.325073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.325081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.325369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.325379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.325715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.325724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.326049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.326059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.326233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.326242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.326595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.326604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.326928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.326938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.327291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.327301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.327635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.327645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 00:34:26.568 [2024-11-20 06:45:46.327866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.568 [2024-11-20 06:45:46.327875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.568 qpair failed and we were unable to recover it. 
00:34:26.568 [2024-11-20 06:45:46.328220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.568 [2024-11-20 06:45:46.328229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.568 qpair failed and we were unable to recover it.
[... the same errno = 111 (connection refused) posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0xb2f010 (addr=10.0.0.2, port=4420), each ending in "qpair failed and we were unable to recover it.", repeats continuously from 06:45:46.328 through 06:45:46.389; final occurrence below ...]
00:34:26.574 [2024-11-20 06:45:46.389165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.574 [2024-11-20 06:45:46.389173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.574 qpair failed and we were unable to recover it.
00:34:26.574 [2024-11-20 06:45:46.389450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.389458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.389787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.389794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.389984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.389992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.390315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.390322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.390492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.390500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.390689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.390699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.390986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.390994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.391171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.391181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.391412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.391420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.391707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.391716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 
00:34:26.574 [2024-11-20 06:45:46.392032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.392040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.392212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.392219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.392499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.392506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.392703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.392711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.393055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.393063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.393362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.393369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.574 [2024-11-20 06:45:46.393676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.574 [2024-11-20 06:45:46.393686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.574 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.393857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.393864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.394057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.394064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.394412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.394420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 
00:34:26.575 [2024-11-20 06:45:46.394617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.394624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.394978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.394987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.395206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.395213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.395257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.395263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.395422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.395433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.395854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.395864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.396088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.396105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.396480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.396489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.396807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.396816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.397133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.397143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 
00:34:26.575 [2024-11-20 06:45:46.397314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.397322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.397691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.397700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.398000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.398009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.398188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.398195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.398559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.398566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.398881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.398889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.399100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.399109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.399332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.399340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.399684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.399691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.400014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.400023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 
00:34:26.575 [2024-11-20 06:45:46.400191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.400201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.400520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.400530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.400864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.400873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.401099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.401109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.401434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.401442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.401760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.401768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.402186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.402196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.402516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.402526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.402833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.402844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.403013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.403021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 
00:34:26.575 [2024-11-20 06:45:46.403382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.403389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.403717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.403725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.404040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.404049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.404379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.404387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.404732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.575 [2024-11-20 06:45:46.404741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.575 qpair failed and we were unable to recover it. 00:34:26.575 [2024-11-20 06:45:46.405078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.405088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.405271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.405280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.405462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.405469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.405787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.405795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.406001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.406011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 
00:34:26.576 [2024-11-20 06:45:46.406215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.406224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.406588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.406596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.407016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.407025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.407326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.407334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.407648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.407655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.407881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.407888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.408270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.408278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.408456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.408464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.408836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.408844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.409180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.409188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 
00:34:26.576 [2024-11-20 06:45:46.409511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.409523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.409821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.409830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.410157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.410166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.410502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.410512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.410681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.410688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.410986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.410993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.411325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.411333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.411663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.411674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.412005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.412015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.412266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.412275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 
00:34:26.576 [2024-11-20 06:45:46.412484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.412492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.412685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.412692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.413032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.413041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.413351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.413358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.413687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.413697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.414023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.414032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.414414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.414423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.414755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.414764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.414928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.414936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.415147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.415156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 
00:34:26.576 [2024-11-20 06:45:46.415461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.415471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.415641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.415648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.415731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.415741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.576 [2024-11-20 06:45:46.416093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.576 [2024-11-20 06:45:46.416102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.576 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.416400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.416408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.416583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.416590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.416892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.416902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.417107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.417115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.417447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.417456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.417645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.417653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 
00:34:26.577 [2024-11-20 06:45:46.417972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.417981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.418188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.418198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.418244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.418253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.418568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.418579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.418893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.418903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.419240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.419249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.419602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.419611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.419880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.419890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.420231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.420240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.420558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.420566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 
00:34:26.577 [2024-11-20 06:45:46.420760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.420778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.421068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.421077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.421397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.421406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.421580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.421589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.421853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.421865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.422206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.422215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.422623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.422633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.422676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.422684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.422853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.422862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.423218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.423226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 
00:34:26.577 [2024-11-20 06:45:46.423566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.423575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.423922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.423930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.424240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.424249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.424421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.424430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.424805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.424814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.425156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.425166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.425476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.425485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.425823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.425833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.426164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.426173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 00:34:26.577 [2024-11-20 06:45:46.426490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.577 [2024-11-20 06:45:46.426500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.577 qpair failed and we were unable to recover it. 
00:34:26.577 [2024-11-20 06:45:46.426857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.577 [2024-11-20 06:45:46.426866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.577 qpair failed and we were unable to recover it.
00:34:26.577 [... the same three-line record repeats back-to-back with fresh timestamps, elapsed time 00:34:26.577 through 00:34:26.857 (2024-11-20 06:45:46.426857 through 06:45:46.486897): every connect() attempt to addr=10.0.0.2, port=4420 for tqpair=0xb2f010 fails with errno = 111 and the qpair cannot be recovered ...]
00:34:26.857 [2024-11-20 06:45:46.487229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.487237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.487512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.487519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.487862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.487871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.488202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.488211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.488552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.488559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.488609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.488615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.489034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.489041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.489117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.489123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.489234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.489242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.489321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.489329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 
00:34:26.857 [2024-11-20 06:45:46.489657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.489666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.489879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.489890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.490064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.490072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.490273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.490280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.490595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.490602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.490907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.490915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.491245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.491253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.491587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.491594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.491922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.491930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.492105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.492117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 
00:34:26.857 [2024-11-20 06:45:46.492317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.492324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.492620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.857 [2024-11-20 06:45:46.492628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.857 qpair failed and we were unable to recover it. 00:34:26.857 [2024-11-20 06:45:46.492978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.492986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.493330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.493337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.493642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.493649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.493871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.493878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.494232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.494240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.494556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.494565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.494896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.494904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.495230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.495239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 
00:34:26.858 [2024-11-20 06:45:46.495569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.495578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.495919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.495927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.496254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.496263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.496434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.496450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.496794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.496804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.496995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.497002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.497200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.497208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.497520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.497527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.497828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.497837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.498223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.498230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 
00:34:26.858 [2024-11-20 06:45:46.498408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.498416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.498797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.498807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.499150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.499159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.499353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.499363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.499738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.499753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.499989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.499996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.500341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.500352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.500675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.500682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.500973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.500982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.501311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.501321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 
00:34:26.858 [2024-11-20 06:45:46.501525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.501532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.501892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.501902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.502118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.502127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.858 [2024-11-20 06:45:46.502445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.858 [2024-11-20 06:45:46.502452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.858 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.502783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.502791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.503126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.503133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.503453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.503460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.503845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.503855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.504084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.504092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.504410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.504418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 
00:34:26.859 [2024-11-20 06:45:46.504595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.504603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.504835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.504843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.505158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.505166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.505338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.505347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.505819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.505831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.506033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.506049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.506418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.506428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.506831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.506841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.507047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.507057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 00:34:26.859 [2024-11-20 06:45:46.507364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.859 [2024-11-20 06:45:46.507373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.859 qpair failed and we were unable to recover it. 
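The repeated errno = 111 is ECONNREFUSED: each connect() reaches the TCP stack at 10.0.0.2, but nothing is accepting on port 4420 (the IANA-assigned NVMe/TCP port) while the target is down, so every attempt is refused outright. A minimal standalone sketch of the same failure path, using loopback and assuming no local listener on 4420 so the refusal reproduces anywhere; the address here is illustrative, not taken from this run:

/* Sketch: a TCP connect() to a port with no listener fails with
 * errno == ECONNREFUSED (111 on Linux), the same error the
 * posix_sock_create lines above report. Loopback is used so the
 * refusal is reproducible locally; nothing should listen on 4420. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Expected: errno = 111 (ECONNREFUSED) when no target listens. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}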
00:34:26.859 [... reconnect-failure triplets continue from 06:45:46.507 through 06:45:46.508, now interleaved with the test script's xtrace output as the harness's start_nvmf_tgt step completes ...]
00:34:26.859 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:34:26.859 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:34:26.859 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:26.859 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:26.859 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:26.859 [... the triplet keeps repeating from 06:45:46.508 through 06:45:46.512 ...]
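Every "qpair failed and we were unable to recover it." line above is one exhausted attempt on the same qpair (tqpair=0xb2f010); the initiator simply keeps issuing fresh connect() calls until the target is listening again. A hedged sketch of that shape of bounded retry loop follows; the attempt budget, delay, and helper name are assumptions for illustration, not SPDK's actual reconnect logic:

/* Sketch of a bounded reconnect loop: retry connect() until it
 * succeeds or the attempt budget runs out. Budget and back-off
 * delay are illustrative assumptions, not values from this run. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One connection attempt; returns a connected fd, or -1 with errno set. */
static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        int saved = errno;
        close(fd);
        errno = saved;      /* preserve the connect() error for the caller */
        return -1;
    }
    return fd;
}

int main(void)
{
    for (int i = 0; i < 10; i++) {           /* attempt budget: assumed */
        int fd = try_connect("127.0.0.1", 4420);
        if (fd >= 0) {
            printf("connected after %d failed attempt(s)\n", i);
            close(fd);
            return 0;
        }
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               i, errno, strerror(errno));
        usleep(100 * 1000);                  /* back off 100 ms: assumed */
    }
    fprintf(stderr, "gave up: target never started listening\n");
    return 1;
}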
00:34:26.859 [2024-11-20 06:45:46.512775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.859 [2024-11-20 06:45:46.512785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.859 qpair failed and we were unable to recover it.
00:34:26.862 [... the same triplet repeats for every further attempt through 06:45:46.536; each connect() to 10.0.0.2:4420 is refused and the qpair cannot be recovered ...]
00:34:26.862 [2024-11-20 06:45:46.537150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.537159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.537472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.537481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.537763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.537770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.537972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.537980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.538147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.538159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.538332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.538342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.538677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.538686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.538880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.538889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.539222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.539230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.539563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.539570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 
00:34:26.862 [2024-11-20 06:45:46.539860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.539870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.540044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.540051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.540379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.540391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.540627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.540636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.540925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.540933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.541271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.541280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.541478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.541488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.541802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.541812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.542150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.542158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.542468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.542478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 
00:34:26.862 [2024-11-20 06:45:46.542796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.542806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.542994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.543001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.543317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.543326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.543650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.543659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.543992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.544000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.544329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.544338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.544593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.544602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.544805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.544815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.544988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.544995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.545352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.545361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 
00:34:26.862 [2024-11-20 06:45:46.545533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.545549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.545931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.545939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.546273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.546282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.546616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.546624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.546925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.546933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.547275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.862 [2024-11-20 06:45:46.547287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.862 qpair failed and we were unable to recover it. 00:34:26.862 [2024-11-20 06:45:46.547613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.547621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.547966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.547975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.548159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.548166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.548496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.548503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 
00:34:26.863 [2024-11-20 06:45:46.548818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.548830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.549160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.549168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.549511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.549520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.549717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.549725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.550101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.550109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.550428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.550437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.550778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.550786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.550947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.550954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.551122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.551130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 00:34:26.863 [2024-11-20 06:45:46.551421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.863 [2024-11-20 06:45:46.551430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.863 qpair failed and we were unable to recover it. 
00:34:26.863 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:26.863 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:26.863 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:26.863 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... reconnect failures continue to interleave with the trace output, 06:45:46.551 through 06:45:46.553 ...]
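For reference, rpc_cmd in the autotest harness is, in effect, a wrapper around SPDK's scripts/rpc.py client, so the traced bdev_malloc_create call above asks the target to create a 64 MB malloc bdev with a 512-byte block size named Malloc0. A minimal standalone sketch of the equivalent calls, assuming an SPDK target is already running and serving RPC on the default /var/tmp/spdk.sock socket (this is an illustration, not part of the test script):

  # Create the same malloc bdev the test requests (64 MB total, 512 B blocks, name Malloc0)
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Confirm the bdev exists before it gets wired into an NVMe-oF subsystem
  ./scripts/rpc.py bdev_get_bdevs -b Malloc0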
00:34:26.863 [2024-11-20 06:45:46.554171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.863 [2024-11-20 06:45:46.554179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.863 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 failures repeat through 06:45:46.586 ...]
00:34:26.866 [2024-11-20 06:45:46.586628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-11-20 06:45:46.586635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-11-20 06:45:46.586805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-11-20 06:45:46.586813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-11-20 06:45:46.586981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-11-20 06:45:46.586988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-11-20 06:45:46.587265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-11-20 06:45:46.587272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-11-20 06:45:46.587471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-11-20 06:45:46.587481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-11-20 06:45:46.587850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-11-20 06:45:46.587859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-11-20 06:45:46.588223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-11-20 06:45:46.588230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-11-20 06:45:46.588555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-11-20 06:45:46.588562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-11-20 06:45:46.588790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.866 [2024-11-20 06:45:46.588799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.866 qpair failed and we were unable to recover it. 00:34:26.866 [2024-11-20 06:45:46.589206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.589214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 
00:34:26.867 [2024-11-20 06:45:46.589523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.589532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.589867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.589877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.590213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.590222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.590514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.590522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.590917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.590925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.590991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.590997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.591041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.591048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.591219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.591227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.591567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.591576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.591901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.591911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 
00:34:26.867 [2024-11-20 06:45:46.592137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.592147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.592494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.592503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.592822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.592830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.593038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.593044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.593446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.593454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.593646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.593655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.593966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.593973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.594143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.594151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.594484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.594493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 00:34:26.867 [2024-11-20 06:45:46.594838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.867 [2024-11-20 06:45:46.594848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.867 qpair failed and we were unable to recover it. 
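errno 111 on Linux is ECONNREFUSED: each connect() above reached 10.0.0.2 but found nothing listening on port 4420, so the initiator's qpair setup fails and is retried, emitting one identical three-line record per attempt — consistent with the target listener being down at this point in the disconnect test. A minimal sketch (not part of the test suite; 127.0.0.1:4420 below is a stand-in for any address with no listener) that reproduces the same errno:

    # Map the number to its symbolic name:
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused

    # Provoke errno 111 by connecting where nothing listens:
    python3 - <<'EOF'
    import socket
    s = socket.socket()                 # TCP socket
    try:
        s.connect(("127.0.0.1", 4420))  # assumes no local listener on 4420
    except OSError as e:
        print(e.errno, e.strerror)      # prints: 111 Connection refused
    EOF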
00:34:26.867 [2024-11-20 06:45:46.595159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-11-20 06:45:46.595167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 [... the same triplet repeats for the retries at 06:45:46.595480 and 06:45:46.595857 ...]
00:34:26.867 Malloc0
00:34:26.867 [... the same triplet repeats for the retries at 06:45:46.596290 and 06:45:46.596591 ...]
00:34:26.867 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:26.867 [... the same triplet repeats for the retries at 06:45:46.596922 and 06:45:46.597151 ...]
00:34:26.867 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:26.867 [2024-11-20 06:45:46.597532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-11-20 06:45:46.597541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.867 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:26.867 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:26.867 [2024-11-20 06:45:46.597843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.867 [2024-11-20 06:45:46.597856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.867 qpair failed and we were unable to recover it.
00:34:26.868 [... the same triplet repeats for each retry from 06:45:46.598205 through 06:45:46.600453 ...]
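Interleaved with the retry flood, the bash xtrace lines show the test's control plane making progress: target_disconnect.sh line 21 runs rpc_cmd nvmf_create_transport -t tcp -o, and the *** TCP Transport Init *** notice a few records below is the target acknowledging the new TCP transport. rpc_cmd in the SPDK harness is, in effect, a wrapper around scripts/rpc.py, so an equivalent stand-alone invocation against a running nvmf_tgt would look roughly like the sketch below (default RPC socket assumed; the flags are verbatim from the trace):

    # Hedged sketch: create the TCP transport over the default RPC socket.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o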
00:34:26.868 [2024-11-20 06:45:46.600667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-11-20 06:45:46.600675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.868 [... the same triplet repeats for each retry from 06:45:46.601082 through 06:45:46.603182 ...]
00:34:26.868 [2024-11-20 06:45:46.603406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.868 [2024-11-20 06:45:46.603386] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:26.868 [2024-11-20 06:45:46.603417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.868 qpair failed and we were unable to recover it.
00:34:26.869 [... the same triplet repeats for each retry from 06:45:46.603738 through 06:45:46.611223 ...]
00:34:26.869 [2024-11-20 06:45:46.611492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.869 [2024-11-20 06:45:46.611500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.869 qpair failed and we were unable to recover it.
00:34:26.869 [... the same triplet repeats for each retry from 06:45:46.611690 through 06:45:46.612474 ...]
00:34:26.869 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:26.869 [... one more identical triplet at 06:45:46.612683 ...]
00:34:26.869 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:26.869 [... one more identical triplet at 06:45:46.613066 ...]
00:34:26.869 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:26.869 [... one more identical triplet at 06:45:46.613404 ...]
00:34:26.869 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:26.869 [... the same triplet repeats for each retry from 06:45:46.613741 through 06:45:46.617023 ...]
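Next, target_disconnect.sh line 22 creates the NVMe-oF subsystem. In SPDK's rpc.py, -a allows any host NQN to connect and -s sets the subsystem serial number (flag meanings as I understand the tool; the argument string itself is verbatim from the trace):

    # Hedged sketch: create subsystem cnode1, any host allowed, fixed serial.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001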
00:34:26.869 [2024-11-20 06:45:46.617431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.869 [2024-11-20 06:45:46.617440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.869 qpair failed and we were unable to recover it.
00:34:26.870 [... the same triplet repeats for each retry from 06:45:46.617771 through 06:45:46.622673 ...]
00:34:26.870 [2024-11-20 06:45:46.622987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.870 [2024-11-20 06:45:46.622995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.870 qpair failed and we were unable to recover it.
00:34:26.870 [... the same triplet repeats for each retry from 06:45:46.623322 through 06:45:46.624389 ...]
00:34:26.870 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:26.870 [... the same triplet repeats for the retries at 06:45:46.624449 and 06:45:46.624789 ...]
00:34:26.870 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:26.870 [... one more identical triplet at 06:45:46.625014 ...]
00:34:26.870 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:26.870 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:26.870 [... the same triplet repeats for each retry from 06:45:46.625369 through 06:45:46.627809 ...]
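The last traced RPC, target_disconnect.sh line 24, attaches a namespace to the subsystem. Malloc0 is presumably the RAM-backed bdev whose name was echoed as the bare "Malloc0" line earlier in this log; the connect()-retry flood keeps running in the foreground throughout:

    # Hedged sketch: expose bdev Malloc0 as a namespace of cnode1.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0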
00:34:26.870 [2024-11-20 06:45:46.628116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-11-20 06:45:46.628125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 00:34:26.870 [2024-11-20 06:45:46.628448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-11-20 06:45:46.628459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 00:34:26.870 [2024-11-20 06:45:46.628656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-11-20 06:45:46.628665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 00:34:26.870 [2024-11-20 06:45:46.628839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-11-20 06:45:46.628846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 00:34:26.870 [2024-11-20 06:45:46.629141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.870 [2024-11-20 06:45:46.629148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.870 qpair failed and we were unable to recover it. 00:34:26.870 [2024-11-20 06:45:46.629487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.629495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.629820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.629830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.630076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.630083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.630266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.630274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.630567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.630576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 
00:34:26.871 [2024-11-20 06:45:46.630902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.630912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.631244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.631252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.631418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.631425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.631756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.631764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.632155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.632162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.632346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.632353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.632655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.632662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.632961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.632970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.633143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.633151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.633346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.633355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 
00:34:26.871 [2024-11-20 06:45:46.633564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.633571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.633913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.633921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.634279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.634287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.634454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.634462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.634841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.634850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.635192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.635200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.635534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.635541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.635935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.635943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.636336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 [2024-11-20 06:45:46.636344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 00:34:26.871 [2024-11-20 06:45:46.636647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.871 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.871 [2024-11-20 06:45:46.636656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420 00:34:26.871 qpair failed and we were unable to recover it. 
00:34:26.871 [2024-11-20 06:45:46.636976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-11-20 06:45:46.636985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:26.871 [2024-11-20 06:45:46.637322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-11-20 06:45:46.637331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:26.871 [2024-11-20 06:45:46.637520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:26.871 [2024-11-20 06:45:46.637533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2f010 with addr=10.0.0.2, port=4420
00:34:26.871 qpair failed and we were unable to recover it.
00:34:26.871 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:26.871 [2024-11-20 06:45:46.637831 - 06:45:46.640117] posix.c:1054 / nvme_tcp.c:2288: nine identical connect() retries (errno = 111, tqpair=0xb2f010, addr=10.0.0.2, port=4420), each ending: qpair failed and we were unable to recover it.
00:34:26.872 [2024-11-20 06:45:46.640312 - 06:45:46.642062] posix.c:1054 / nvme_tcp.c:2288: six further identical connect() retries, each ending: qpair failed and we were unable to recover it.
00:34:26.872 [2024-11-20 06:45:46.642389 - 06:45:46.643677] posix.c:1054 / nvme_tcp.c:2288: five identical connect() retries (errno = 111, tqpair=0xb2f010, addr=10.0.0.2, port=4420), each ending: qpair failed and we were unable to recover it.
00:34:26.872 [2024-11-20 06:45:46.643831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:26.872 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:26.872 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:26.872 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:26.872 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:26.872 [2024-11-20 06:45:46.654727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:26.872 [2024-11-20 06:45:46.654828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:26.872 [2024-11-20 06:45:46.654854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:26.872 [2024-11-20 06:45:46.654860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:26.872 [2024-11-20 06:45:46.654865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:26.872 [2024-11-20 06:45:46.654888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:26.872 qpair failed and we were unable to recover it.
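The xtrace lines above show the harness re-arming the target over JSON-RPC; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so the same three steps can be issued directly as below (subsystem NQN, bdev name, address, and port are copied verbatim from the trace):

    # Re-attach the namespace and bring the TCP listeners back, as the
    # test does at target_disconnect.sh@24-26.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The NOTICE at 06:45:46.643831 confirms the listener is back, which is why the log switches from refused TCP connects (errno = 111) to fabric-level CONNECT failures against the restored listener from here on.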
00:34:26.872 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:26.872 06:45:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2906223
00:34:26.872 [2024-11-20 06:45:46.664469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:26.872 [2024-11-20 06:45:46.664557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:26.872 [2024-11-20 06:45:46.664574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:26.872 [2024-11-20 06:45:46.664580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:26.872 [2024-11-20 06:45:46.664585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:26.872 [2024-11-20 06:45:46.664599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:26.872 qpair failed and we were unable to recover it.
00:34:26.872 [2024-11-20 06:45:46.674561 - 06:45:46.684565] two further identical CONNECT failure sequences (ctrlr.c: 762 / nvme_fabric.c: 599, 610 / nvme_tcp.c:2348, 2125 / nvme_qpair.c: 812), each ending: qpair failed and we were unable to recover it.
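The "wait 2906223" at target_disconnect.sh@50 is the harness blocking on its backgrounded reconnect workload and turning that child's exit status into the test verdict. A minimal sketch of the bash pattern, where reconnect_workload is a hypothetical stand-in for the real background job, not a harness function:

    reconnect_workload &            # hypothetical job doing the I/O and reconnect attempts
    pid=$!
    if ! wait "$pid"; then          # wait returns the child's exit status
        echo "background workload (pid $pid) failed"
    fi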
00:34:26.872 [2024-11-20 06:45:46.694570 - 06:45:46.704675] two identical CONNECT failure sequences (Unknown controller ID 0x1; Connect command failed, rc -5; completed with error: sct 1, sc 130; failed to poll NVMe-oF Fabric CONNECT command; failed to connect tqpair=0xb2f010; CQ transport error -6 on qpair id 3), each ending: qpair failed and we were unable to recover it.
00:34:26.873 [2024-11-20 06:45:46.714449 - 06:45:46.714544] one further identical CONNECT failure sequence; qpair failed and we were unable to recover it.
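Every CONNECT failure sequence reports the same completion status, sct 1, sc 130. SCT 1 is the Command Specific status type, and for the NVMe-oF Connect command SC 0x82 (decimal 130) means Connect Invalid Parameters; that matches the target-side error at ctrlr.c: 762, where the host asks to attach I/O qpair id 3 to controller ID 0x1, which the target does not recognize. A small decoding sketch using those spec-defined values (decode_connect_status is a made-up helper, not part of the SPDK tree):

    decode_connect_status() {    # args: <sct> <sc>, both decimal
        local sct=$1 sc=$2
        [[ $sct -ne 1 ]] && { echo "not a command-specific status"; return; }
        case $(printf '0x%02x' "$sc") in
            0x80) echo "Connect Incompatible Format" ;;
            0x81) echo "Connect Controller Busy" ;;
            0x82) echo "Connect Invalid Parameters" ;;    # sc 130, as logged here
            *)    echo "unlisted command-specific status $(printf '0x%02x' "$sc")" ;;
        esac
    }
    decode_connect_status 1 130    # prints: Connect Invalid Parameters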
00:34:26.873 [2024-11-20 06:45:46.724599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.873 [2024-11-20 06:45:46.724664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.873 [2024-11-20 06:45:46.724680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.873 [2024-11-20 06:45:46.724686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.873 [2024-11-20 06:45:46.724691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:26.873 [2024-11-20 06:45:46.724703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-11-20 06:45:46.734572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.873 [2024-11-20 06:45:46.734633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.873 [2024-11-20 06:45:46.734647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.873 [2024-11-20 06:45:46.734652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.873 [2024-11-20 06:45:46.734657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:26.873 [2024-11-20 06:45:46.734669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.873 qpair failed and we were unable to recover it. 00:34:26.873 [2024-11-20 06:45:46.744663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.873 [2024-11-20 06:45:46.744725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.873 [2024-11-20 06:45:46.744741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.873 [2024-11-20 06:45:46.744754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.873 [2024-11-20 06:45:46.744764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:26.873 [2024-11-20 06:45:46.744776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.873 qpair failed and we were unable to recover it. 
00:34:26.873 [2024-11-20 06:45:46.754735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.873 [2024-11-20 06:45:46.754803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.873 [2024-11-20 06:45:46.754818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.873 [2024-11-20 06:45:46.754823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.873 [2024-11-20 06:45:46.754828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:26.873 [2024-11-20 06:45:46.754840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.873 qpair failed and we were unable to recover it. 00:34:27.140 [2024-11-20 06:45:46.764713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.140 [2024-11-20 06:45:46.764786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.140 [2024-11-20 06:45:46.764802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.140 [2024-11-20 06:45:46.764807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.140 [2024-11-20 06:45:46.764812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.140 [2024-11-20 06:45:46.764824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.140 qpair failed and we were unable to recover it. 00:34:27.140 [2024-11-20 06:45:46.774782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.140 [2024-11-20 06:45:46.774853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.140 [2024-11-20 06:45:46.774869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.140 [2024-11-20 06:45:46.774874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.140 [2024-11-20 06:45:46.774879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.140 [2024-11-20 06:45:46.774891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.140 qpair failed and we were unable to recover it. 
00:34:27.140 [2024-11-20 06:45:46.784763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.140 [2024-11-20 06:45:46.784825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.784843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.784848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.784853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.784866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 00:34:27.141 [2024-11-20 06:45:46.794799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.794858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.794873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.794879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.794883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.794896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 00:34:27.141 [2024-11-20 06:45:46.804823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.804887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.804901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.804906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.804911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.804923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 
00:34:27.141 [2024-11-20 06:45:46.814875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.814965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.814980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.814985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.814989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.815001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 00:34:27.141 [2024-11-20 06:45:46.824853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.824913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.824927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.824932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.824936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.824949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 00:34:27.141 [2024-11-20 06:45:46.834784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.834844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.834869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.834874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.834879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.834891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 
00:34:27.141 [2024-11-20 06:45:46.845080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.845156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.845171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.845176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.845181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.845193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 00:34:27.141 [2024-11-20 06:45:46.855072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.855142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.855157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.855162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.855166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.855178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 00:34:27.141 [2024-11-20 06:45:46.865068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.865174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.865188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.865193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.865197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.865209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 
00:34:27.141 [2024-11-20 06:45:46.875042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.875105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.875120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.875125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.875135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.875146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 00:34:27.141 [2024-11-20 06:45:46.885039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.885104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.885120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.885125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.885130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.885142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 00:34:27.141 [2024-11-20 06:45:46.895129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.895188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.895202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.895207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.895211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.895223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 
00:34:27.141 [2024-11-20 06:45:46.905166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.141 [2024-11-20 06:45:46.905219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.141 [2024-11-20 06:45:46.905233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.141 [2024-11-20 06:45:46.905239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.141 [2024-11-20 06:45:46.905243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.141 [2024-11-20 06:45:46.905255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.141 qpair failed and we were unable to recover it. 00:34:27.142 [2024-11-20 06:45:46.915153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.142 [2024-11-20 06:45:46.915215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.142 [2024-11-20 06:45:46.915231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.142 [2024-11-20 06:45:46.915237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.142 [2024-11-20 06:45:46.915242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.142 [2024-11-20 06:45:46.915255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.142 qpair failed and we were unable to recover it. 00:34:27.142 [2024-11-20 06:45:46.925179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.142 [2024-11-20 06:45:46.925244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.142 [2024-11-20 06:45:46.925259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.142 [2024-11-20 06:45:46.925264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.142 [2024-11-20 06:45:46.925269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.142 [2024-11-20 06:45:46.925281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.142 qpair failed and we were unable to recover it. 
00:34:27.142 [2024-11-20 06:45:46.935244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.142 [2024-11-20 06:45:46.935317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.142 [2024-11-20 06:45:46.935332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.142 [2024-11-20 06:45:46.935338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.142 [2024-11-20 06:45:46.935342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.142 [2024-11-20 06:45:46.935354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.142 qpair failed and we were unable to recover it. 00:34:27.142 [2024-11-20 06:45:46.945200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.142 [2024-11-20 06:45:46.945262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.142 [2024-11-20 06:45:46.945277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.142 [2024-11-20 06:45:46.945282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.142 [2024-11-20 06:45:46.945287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.142 [2024-11-20 06:45:46.945299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.142 qpair failed and we were unable to recover it. 00:34:27.142 [2024-11-20 06:45:46.955254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.142 [2024-11-20 06:45:46.955303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.142 [2024-11-20 06:45:46.955318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.142 [2024-11-20 06:45:46.955323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.142 [2024-11-20 06:45:46.955327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.142 [2024-11-20 06:45:46.955338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.142 qpair failed and we were unable to recover it. 
00:34:27.142 [2024-11-20 06:45:46.965304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.142 [2024-11-20 06:45:46.965402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.142 [2024-11-20 06:45:46.965422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.142 [2024-11-20 06:45:46.965427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.142 [2024-11-20 06:45:46.965432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.142 [2024-11-20 06:45:46.965443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.142 qpair failed and we were unable to recover it. 00:34:27.142 [2024-11-20 06:45:46.975366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.142 [2024-11-20 06:45:46.975450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.142 [2024-11-20 06:45:46.975465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.142 [2024-11-20 06:45:46.975470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.142 [2024-11-20 06:45:46.975475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.142 [2024-11-20 06:45:46.975487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.142 qpair failed and we were unable to recover it. 00:34:27.142 [2024-11-20 06:45:46.985330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.142 [2024-11-20 06:45:46.985392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.142 [2024-11-20 06:45:46.985408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.142 [2024-11-20 06:45:46.985413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.142 [2024-11-20 06:45:46.985418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.142 [2024-11-20 06:45:46.985430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.142 qpair failed and we were unable to recover it. 
00:34:27.142 [2024-11-20 06:45:46.995398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.142 [2024-11-20 06:45:46.995457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.142 [2024-11-20 06:45:46.995483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.142 [2024-11-20 06:45:46.995490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.142 [2024-11-20 06:45:46.995495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.142 [2024-11-20 06:45:46.995512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.142 qpair failed and we were unable to recover it. 00:34:27.142 [2024-11-20 06:45:47.005412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.142 [2024-11-20 06:45:47.005474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.142 [2024-11-20 06:45:47.005489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.142 [2024-11-20 06:45:47.005495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.142 [2024-11-20 06:45:47.005505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.142 [2024-11-20 06:45:47.005518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.142 qpair failed and we were unable to recover it. 00:34:27.142 [2024-11-20 06:45:47.015486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.142 [2024-11-20 06:45:47.015555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.142 [2024-11-20 06:45:47.015572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.142 [2024-11-20 06:45:47.015577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.142 [2024-11-20 06:45:47.015582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.142 [2024-11-20 06:45:47.015595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.142 qpair failed and we were unable to recover it. 
00:34:27.142 [2024-11-20 06:45:47.025346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.142 [2024-11-20 06:45:47.025406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.142 [2024-11-20 06:45:47.025422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.142 [2024-11-20 06:45:47.025427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.142 [2024-11-20 06:45:47.025432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.142 [2024-11-20 06:45:47.025444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.142 qpair failed and we were unable to recover it.
00:34:27.142 [2024-11-20 06:45:47.035359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.142 [2024-11-20 06:45:47.035423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.142 [2024-11-20 06:45:47.035438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.142 [2024-11-20 06:45:47.035444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.142 [2024-11-20 06:45:47.035448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.142 [2024-11-20 06:45:47.035461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.142 qpair failed and we were unable to recover it.
00:34:27.142 [2024-11-20 06:45:47.045532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.143 [2024-11-20 06:45:47.045599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.143 [2024-11-20 06:45:47.045617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.143 [2024-11-20 06:45:47.045623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.143 [2024-11-20 06:45:47.045627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.143 [2024-11-20 06:45:47.045644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.143 qpair failed and we were unable to recover it.
00:34:27.404 [2024-11-20 06:45:47.055606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.404 [2024-11-20 06:45:47.055676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.404 [2024-11-20 06:45:47.055694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.404 [2024-11-20 06:45:47.055699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.404 [2024-11-20 06:45:47.055704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.404 [2024-11-20 06:45:47.055717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.404 qpair failed and we were unable to recover it.
00:34:27.404 [2024-11-20 06:45:47.065450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.404 [2024-11-20 06:45:47.065514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.404 [2024-11-20 06:45:47.065532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.404 [2024-11-20 06:45:47.065538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.404 [2024-11-20 06:45:47.065542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.404 [2024-11-20 06:45:47.065556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.075597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.075655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.075671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.075676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.075681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.075694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.085564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.085624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.085640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.085645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.085650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.085662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.095692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.095763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.095785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.095790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.095795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.095807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.105696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.105760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.105775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.105781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.105785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.105797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.115699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.115766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.115781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.115786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.115791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.115803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.125716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.125786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.125802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.125807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.125812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.125824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.135796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.135862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.135877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.135882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.135892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.135904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.145837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.145938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.145953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.145958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.145962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.145974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.155859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.155944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.155958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.155963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.155968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.155979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.165867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.165931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.165946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.165952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.165956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.165968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.175973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.176044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.176059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.176064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.176069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.176080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.185950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.186012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.186027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.186033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.186037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.405 [2024-11-20 06:45:47.186049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.405 qpair failed and we were unable to recover it.
00:34:27.405 [2024-11-20 06:45:47.195985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.405 [2024-11-20 06:45:47.196046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.405 [2024-11-20 06:45:47.196060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.405 [2024-11-20 06:45:47.196065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.405 [2024-11-20 06:45:47.196070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.196082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.206023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.206087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.206102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.206107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.206112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.206124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.216115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.216186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.216201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.216206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.216211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.216223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.225937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.225993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.226011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.226017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.226021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.226033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.236097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.236158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.236174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.236179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.236183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.236195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.246142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.246207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.246221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.246226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.246231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.246242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.256053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.256116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.256131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.256136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.256140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.256152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.266181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.266241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.266255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.266260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.266270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.266281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.276262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.276320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.276335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.276341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.276345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.276356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.286269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.286332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.286347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.286352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.286356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.286368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.296312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.296427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.296442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.296447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.296451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.296463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.306339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.306392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.306406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.306411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.306415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.306427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.406 [2024-11-20 06:45:47.316326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.406 [2024-11-20 06:45:47.316383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.406 [2024-11-20 06:45:47.316398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.406 [2024-11-20 06:45:47.316403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.406 [2024-11-20 06:45:47.316408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.406 [2024-11-20 06:45:47.316419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.406 qpair failed and we were unable to recover it.
00:34:27.669 [2024-11-20 06:45:47.326381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.669 [2024-11-20 06:45:47.326503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.669 [2024-11-20 06:45:47.326518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.669 [2024-11-20 06:45:47.326524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.669 [2024-11-20 06:45:47.326529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.669 [2024-11-20 06:45:47.326540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.669 qpair failed and we were unable to recover it.
00:34:27.669 [2024-11-20 06:45:47.336422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.669 [2024-11-20 06:45:47.336486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.669 [2024-11-20 06:45:47.336518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.669 [2024-11-20 06:45:47.336526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.669 [2024-11-20 06:45:47.336531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.669 [2024-11-20 06:45:47.336551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.669 qpair failed and we were unable to recover it.
00:34:27.669 [2024-11-20 06:45:47.346457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.669 [2024-11-20 06:45:47.346528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.669 [2024-11-20 06:45:47.346561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.669 [2024-11-20 06:45:47.346568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.669 [2024-11-20 06:45:47.346573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.669 [2024-11-20 06:45:47.346593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.669 qpair failed and we were unable to recover it.
00:34:27.670 [2024-11-20 06:45:47.356448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.670 [2024-11-20 06:45:47.356504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.670 [2024-11-20 06:45:47.356543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.670 [2024-11-20 06:45:47.356550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.670 [2024-11-20 06:45:47.356555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.670 [2024-11-20 06:45:47.356575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.670 qpair failed and we were unable to recover it.
00:34:27.670 [2024-11-20 06:45:47.366469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.670 [2024-11-20 06:45:47.366533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.670 [2024-11-20 06:45:47.366552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.670 [2024-11-20 06:45:47.366557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.670 [2024-11-20 06:45:47.366562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.670 [2024-11-20 06:45:47.366577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.670 qpair failed and we were unable to recover it.
00:34:27.670 [2024-11-20 06:45:47.376534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.670 [2024-11-20 06:45:47.376602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.670 [2024-11-20 06:45:47.376619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.670 [2024-11-20 06:45:47.376624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.670 [2024-11-20 06:45:47.376628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.670 [2024-11-20 06:45:47.376641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.670 qpair failed and we were unable to recover it.
00:34:27.670 [2024-11-20 06:45:47.386571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.670 [2024-11-20 06:45:47.386631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.670 [2024-11-20 06:45:47.386648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.670 [2024-11-20 06:45:47.386653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.670 [2024-11-20 06:45:47.386658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.670 [2024-11-20 06:45:47.386671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.670 qpair failed and we were unable to recover it.
00:34:27.670 [2024-11-20 06:45:47.396579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.670 [2024-11-20 06:45:47.396645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.670 [2024-11-20 06:45:47.396659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.670 [2024-11-20 06:45:47.396665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.670 [2024-11-20 06:45:47.396675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.670 [2024-11-20 06:45:47.396687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.670 qpair failed and we were unable to recover it.
00:34:27.670 [2024-11-20 06:45:47.406635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.670 [2024-11-20 06:45:47.406731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.670 [2024-11-20 06:45:47.406752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.670 [2024-11-20 06:45:47.406759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.670 [2024-11-20 06:45:47.406763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.670 [2024-11-20 06:45:47.406776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.670 qpair failed and we were unable to recover it.
00:34:27.670 [2024-11-20 06:45:47.416695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.670 [2024-11-20 06:45:47.416802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.670 [2024-11-20 06:45:47.416817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.670 [2024-11-20 06:45:47.416823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.670 [2024-11-20 06:45:47.416828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.670 [2024-11-20 06:45:47.416841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.670 qpair failed and we were unable to recover it.
00:34:27.670 [2024-11-20 06:45:47.426698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.670 [2024-11-20 06:45:47.426763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.670 [2024-11-20 06:45:47.426779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.670 [2024-11-20 06:45:47.426784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.670 [2024-11-20 06:45:47.426789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.670 [2024-11-20 06:45:47.426801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.670 qpair failed and we were unable to recover it.
00:34:27.670 [2024-11-20 06:45:47.436723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.670 [2024-11-20 06:45:47.436779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.670 [2024-11-20 06:45:47.436794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.670 [2024-11-20 06:45:47.436800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.670 [2024-11-20 06:45:47.436805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.670 [2024-11-20 06:45:47.436817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.670 qpair failed and we were unable to recover it.
00:34:27.670 [2024-11-20 06:45:47.446629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.671 [2024-11-20 06:45:47.446725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.671 [2024-11-20 06:45:47.446742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.671 [2024-11-20 06:45:47.446752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.671 [2024-11-20 06:45:47.446757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.671 [2024-11-20 06:45:47.446769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.671 qpair failed and we were unable to recover it.
00:34:27.671 [2024-11-20 06:45:47.456822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.671 [2024-11-20 06:45:47.456897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.671 [2024-11-20 06:45:47.456912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.671 [2024-11-20 06:45:47.456918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.671 [2024-11-20 06:45:47.456922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.671 [2024-11-20 06:45:47.456935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.671 qpair failed and we were unable to recover it.
00:34:27.671 [2024-11-20 06:45:47.466670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.671 [2024-11-20 06:45:47.466732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.671 [2024-11-20 06:45:47.466752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.671 [2024-11-20 06:45:47.466758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.671 [2024-11-20 06:45:47.466763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.671 [2024-11-20 06:45:47.466775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.671 qpair failed and we were unable to recover it.
00:34:27.671 [2024-11-20 06:45:47.476830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.671 [2024-11-20 06:45:47.476889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.671 [2024-11-20 06:45:47.476904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.671 [2024-11-20 06:45:47.476910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.671 [2024-11-20 06:45:47.476914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.671 [2024-11-20 06:45:47.476927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.671 qpair failed and we were unable to recover it.
00:34:27.671 [2024-11-20 06:45:47.486875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.671 [2024-11-20 06:45:47.486936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.671 [2024-11-20 06:45:47.486963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.671 [2024-11-20 06:45:47.486968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.671 [2024-11-20 06:45:47.486972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.671 [2024-11-20 06:45:47.486985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.671 qpair failed and we were unable to recover it.
00:34:27.671 [2024-11-20 06:45:47.496828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.671 [2024-11-20 06:45:47.496892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.671 [2024-11-20 06:45:47.496908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.671 [2024-11-20 06:45:47.496913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.671 [2024-11-20 06:45:47.496918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.671 [2024-11-20 06:45:47.496930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.671 qpair failed and we were unable to recover it.
00:34:27.671 [2024-11-20 06:45:47.506919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.671 [2024-11-20 06:45:47.507031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.671 [2024-11-20 06:45:47.507052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.671 [2024-11-20 06:45:47.507058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.671 [2024-11-20 06:45:47.507062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.671 [2024-11-20 06:45:47.507079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.671 qpair failed and we were unable to recover it.
00:34:27.671 [2024-11-20 06:45:47.516865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.671 [2024-11-20 06:45:47.516958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.671 [2024-11-20 06:45:47.516974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.671 [2024-11-20 06:45:47.516979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.671 [2024-11-20 06:45:47.516983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.671 [2024-11-20 06:45:47.516996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.671 qpair failed and we were unable to recover it.
00:34:27.671 [2024-11-20 06:45:47.526995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.671 [2024-11-20 06:45:47.527059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.671 [2024-11-20 06:45:47.527079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.671 [2024-11-20 06:45:47.527085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.671 [2024-11-20 06:45:47.527095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.671 [2024-11-20 06:45:47.527109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.671 qpair failed and we were unable to recover it.
00:34:27.671 [2024-11-20 06:45:47.537007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.671 [2024-11-20 06:45:47.537076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.671 [2024-11-20 06:45:47.537092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.671 [2024-11-20 06:45:47.537097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.671 [2024-11-20 06:45:47.537101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.672 [2024-11-20 06:45:47.537114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.672 qpair failed and we were unable to recover it.
00:34:27.672 [2024-11-20 06:45:47.547026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.672 [2024-11-20 06:45:47.547084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.672 [2024-11-20 06:45:47.547100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.672 [2024-11-20 06:45:47.547105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.672 [2024-11-20 06:45:47.547110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.672 [2024-11-20 06:45:47.547122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.672 qpair failed and we were unable to recover it.
00:34:27.672 [2024-11-20 06:45:47.556950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.672 [2024-11-20 06:45:47.557007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.672 [2024-11-20 06:45:47.557022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.672 [2024-11-20 06:45:47.557028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.672 [2024-11-20 06:45:47.557032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.672 [2024-11-20 06:45:47.557045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.672 qpair failed and we were unable to recover it.
00:34:27.672 [2024-11-20 06:45:47.567137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.672 [2024-11-20 06:45:47.567234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.672 [2024-11-20 06:45:47.567249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.672 [2024-11-20 06:45:47.567255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.672 [2024-11-20 06:45:47.567259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.672 [2024-11-20 06:45:47.567272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.672 qpair failed and we were unable to recover it.
00:34:27.672 [2024-11-20 06:45:47.577173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.672 [2024-11-20 06:45:47.577246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.672 [2024-11-20 06:45:47.577261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.672 [2024-11-20 06:45:47.577267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.672 [2024-11-20 06:45:47.577271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.672 [2024-11-20 06:45:47.577284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.672 qpair failed and we were unable to recover it.
00:34:27.934 [2024-11-20 06:45:47.587172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.934 [2024-11-20 06:45:47.587237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.934 [2024-11-20 06:45:47.587256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.934 [2024-11-20 06:45:47.587262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.934 [2024-11-20 06:45:47.587267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.934 [2024-11-20 06:45:47.587283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.934 qpair failed and we were unable to recover it.
00:34:27.934 [2024-11-20 06:45:47.597170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.934 [2024-11-20 06:45:47.597232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.934 [2024-11-20 06:45:47.597248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.934 [2024-11-20 06:45:47.597253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.934 [2024-11-20 06:45:47.597258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.934 [2024-11-20 06:45:47.597271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.934 qpair failed and we were unable to recover it. 00:34:27.934 [2024-11-20 06:45:47.607209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.934 [2024-11-20 06:45:47.607273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.934 [2024-11-20 06:45:47.607288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.934 [2024-11-20 06:45:47.607294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.934 [2024-11-20 06:45:47.607299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.934 [2024-11-20 06:45:47.607310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.934 qpair failed and we were unable to recover it. 00:34:27.934 [2024-11-20 06:45:47.617277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.934 [2024-11-20 06:45:47.617354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.934 [2024-11-20 06:45:47.617377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.934 [2024-11-20 06:45:47.617382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.934 [2024-11-20 06:45:47.617387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:27.934 [2024-11-20 06:45:47.617401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:27.934 qpair failed and we were unable to recover it. 
00:34:27.934 [2024-11-20 06:45:47.627275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.934 [2024-11-20 06:45:47.627372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.934 [2024-11-20 06:45:47.627388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.934 [2024-11-20 06:45:47.627393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.934 [2024-11-20 06:45:47.627398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.934 [2024-11-20 06:45:47.627410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.934 qpair failed and we were unable to recover it.
00:34:27.934 [2024-11-20 06:45:47.637190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.934 [2024-11-20 06:45:47.637282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.934 [2024-11-20 06:45:47.637299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.934 [2024-11-20 06:45:47.637304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.934 [2024-11-20 06:45:47.637309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.637321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.647316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.647375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.647392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.647397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.935 [2024-11-20 06:45:47.647402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.647414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.657271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.657335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.657350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.657355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.935 [2024-11-20 06:45:47.657365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.657376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.667421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.667478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.667492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.667497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.935 [2024-11-20 06:45:47.667502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.667514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.677424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.677478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.677493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.677498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.935 [2024-11-20 06:45:47.677503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.677514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.687343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.687407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.687422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.687427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.935 [2024-11-20 06:45:47.687431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.687443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.697532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.697605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.697631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.697637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.935 [2024-11-20 06:45:47.697642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.697659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.707517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.707577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.707595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.707601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.935 [2024-11-20 06:45:47.707606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.707621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.717538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.717593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.717610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.717616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.935 [2024-11-20 06:45:47.717620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.717633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.727615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.727679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.727694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.727700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.935 [2024-11-20 06:45:47.727705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.727718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.737654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.737754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.737770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.737775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.935 [2024-11-20 06:45:47.737780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.737794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.747649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.747763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.747787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.747792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.935 [2024-11-20 06:45:47.747798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.935 [2024-11-20 06:45:47.747812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.935 qpair failed and we were unable to recover it.
00:34:27.935 [2024-11-20 06:45:47.757699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.935 [2024-11-20 06:45:47.757766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.935 [2024-11-20 06:45:47.757781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.935 [2024-11-20 06:45:47.757786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.936 [2024-11-20 06:45:47.757791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.936 [2024-11-20 06:45:47.757803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.936 qpair failed and we were unable to recover it.
00:34:27.936 [2024-11-20 06:45:47.767733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.936 [2024-11-20 06:45:47.767804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.936 [2024-11-20 06:45:47.767818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.936 [2024-11-20 06:45:47.767824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.936 [2024-11-20 06:45:47.767828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.936 [2024-11-20 06:45:47.767840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.936 qpair failed and we were unable to recover it.
00:34:27.936 [2024-11-20 06:45:47.777809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.936 [2024-11-20 06:45:47.777877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.936 [2024-11-20 06:45:47.777893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.936 [2024-11-20 06:45:47.777898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.936 [2024-11-20 06:45:47.777902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.936 [2024-11-20 06:45:47.777914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.936 qpair failed and we were unable to recover it.
00:34:27.936 [2024-11-20 06:45:47.787816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.936 [2024-11-20 06:45:47.787876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.936 [2024-11-20 06:45:47.787891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.936 [2024-11-20 06:45:47.787897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.936 [2024-11-20 06:45:47.787906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.936 [2024-11-20 06:45:47.787919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.936 qpair failed and we were unable to recover it.
00:34:27.936 [2024-11-20 06:45:47.797805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.936 [2024-11-20 06:45:47.797862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.936 [2024-11-20 06:45:47.797877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.936 [2024-11-20 06:45:47.797882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.936 [2024-11-20 06:45:47.797887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.936 [2024-11-20 06:45:47.797899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.936 qpair failed and we were unable to recover it.
00:34:27.936 [2024-11-20 06:45:47.807733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.936 [2024-11-20 06:45:47.807803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.936 [2024-11-20 06:45:47.807818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.936 [2024-11-20 06:45:47.807824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.936 [2024-11-20 06:45:47.807828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.936 [2024-11-20 06:45:47.807840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.936 qpair failed and we were unable to recover it.
00:34:27.936 [2024-11-20 06:45:47.817896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.936 [2024-11-20 06:45:47.817959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.936 [2024-11-20 06:45:47.817973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.936 [2024-11-20 06:45:47.817978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.936 [2024-11-20 06:45:47.817983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.936 [2024-11-20 06:45:47.817995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.936 qpair failed and we were unable to recover it.
00:34:27.936 [2024-11-20 06:45:47.827781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.936 [2024-11-20 06:45:47.827840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.936 [2024-11-20 06:45:47.827857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.936 [2024-11-20 06:45:47.827863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.936 [2024-11-20 06:45:47.827868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.936 [2024-11-20 06:45:47.827883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.936 qpair failed and we were unable to recover it.
00:34:27.936 [2024-11-20 06:45:47.837809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.936 [2024-11-20 06:45:47.837866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.936 [2024-11-20 06:45:47.837883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.936 [2024-11-20 06:45:47.837889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.936 [2024-11-20 06:45:47.837893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.936 [2024-11-20 06:45:47.837906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.936 qpair failed and we were unable to recover it.
00:34:27.936 [2024-11-20 06:45:47.847974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.936 [2024-11-20 06:45:47.848040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.936 [2024-11-20 06:45:47.848055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.936 [2024-11-20 06:45:47.848061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.936 [2024-11-20 06:45:47.848065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:27.936 [2024-11-20 06:45:47.848077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:27.936 qpair failed and we were unable to recover it.
00:34:28.198 [2024-11-20 06:45:47.858017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.198 [2024-11-20 06:45:47.858089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.198 [2024-11-20 06:45:47.858104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.198 [2024-11-20 06:45:47.858110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.198 [2024-11-20 06:45:47.858115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.198 [2024-11-20 06:45:47.858127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.198 qpair failed and we were unable to recover it.
00:34:28.198 [2024-11-20 06:45:47.868040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.198 [2024-11-20 06:45:47.868101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.198 [2024-11-20 06:45:47.868116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.198 [2024-11-20 06:45:47.868121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.198 [2024-11-20 06:45:47.868125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.198 [2024-11-20 06:45:47.868138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.198 qpair failed and we were unable to recover it.
00:34:28.198 [2024-11-20 06:45:47.878041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.198 [2024-11-20 06:45:47.878095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.198 [2024-11-20 06:45:47.878114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.198 [2024-11-20 06:45:47.878120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.198 [2024-11-20 06:45:47.878124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.198 [2024-11-20 06:45:47.878136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.198 qpair failed and we were unable to recover it.
00:34:28.198 [2024-11-20 06:45:47.888111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.198 [2024-11-20 06:45:47.888174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.888189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.888194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.888199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.888210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:47.898153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:47.898265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.898280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.898286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.898291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.898303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:47.908138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:47.908219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.908235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.908242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.908246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.908258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:47.918182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:47.918244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.918258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.918264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.918274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.918285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:47.928168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:47.928266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.928280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.928285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.928290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.928301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:47.938271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:47.938338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.938352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.938358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.938362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.938374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:47.948153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:47.948216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.948231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.948236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.948241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.948252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:47.958298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:47.958362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.958377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.958382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.958387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.958399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:47.968344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:47.968408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.968422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.968427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.968432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.968444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:47.978366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:47.978426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.978441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.978446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.978451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.978462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:47.988376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:47.988437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.988452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.988457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.988462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.988473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:47.998282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:47.998337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:47.998352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:47.998358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:47.998362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:47.998374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:48.008470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:48.008533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.199 [2024-11-20 06:45:48.008553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.199 [2024-11-20 06:45:48.008558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.199 [2024-11-20 06:45:48.008564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.199 [2024-11-20 06:45:48.008577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.199 qpair failed and we were unable to recover it.
00:34:28.199 [2024-11-20 06:45:48.018462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.199 [2024-11-20 06:45:48.018538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.200 [2024-11-20 06:45:48.018570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.200 [2024-11-20 06:45:48.018577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.200 [2024-11-20 06:45:48.018582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.200 [2024-11-20 06:45:48.018602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.200 qpair failed and we were unable to recover it.
00:34:28.200 [2024-11-20 06:45:48.028488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.200 [2024-11-20 06:45:48.028547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.200 [2024-11-20 06:45:48.028565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.200 [2024-11-20 06:45:48.028570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.200 [2024-11-20 06:45:48.028575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.200 [2024-11-20 06:45:48.028590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.200 qpair failed and we were unable to recover it.
00:34:28.200 [2024-11-20 06:45:48.038492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.200 [2024-11-20 06:45:48.038551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.200 [2024-11-20 06:45:48.038567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.200 [2024-11-20 06:45:48.038572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.200 [2024-11-20 06:45:48.038577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.200 [2024-11-20 06:45:48.038590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.200 qpair failed and we were unable to recover it.
00:34:28.200 [2024-11-20 06:45:48.048482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.200 [2024-11-20 06:45:48.048542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.200 [2024-11-20 06:45:48.048558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.200 [2024-11-20 06:45:48.048563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.200 [2024-11-20 06:45:48.048573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.200 [2024-11-20 06:45:48.048586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.200 qpair failed and we were unable to recover it.
00:34:28.200 [2024-11-20 06:45:48.058626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.200 [2024-11-20 06:45:48.058696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.200 [2024-11-20 06:45:48.058711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.200 [2024-11-20 06:45:48.058716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.200 [2024-11-20 06:45:48.058721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.200 [2024-11-20 06:45:48.058733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.200 qpair failed and we were unable to recover it.
00:34:28.200 [2024-11-20 06:45:48.068593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.200 [2024-11-20 06:45:48.068662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.200 [2024-11-20 06:45:48.068678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.200 [2024-11-20 06:45:48.068684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.200 [2024-11-20 06:45:48.068689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.200 [2024-11-20 06:45:48.068701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.200 qpair failed and we were unable to recover it.
00:34:28.200 [2024-11-20 06:45:48.078619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.200 [2024-11-20 06:45:48.078676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.200 [2024-11-20 06:45:48.078692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.200 [2024-11-20 06:45:48.078698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.200 [2024-11-20 06:45:48.078702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.200 [2024-11-20 06:45:48.078715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.200 qpair failed and we were unable to recover it.
00:34:28.200 [2024-11-20 06:45:48.088721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.200 [2024-11-20 06:45:48.088836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.200 [2024-11-20 06:45:48.088853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.200 [2024-11-20 06:45:48.088859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.200 [2024-11-20 06:45:48.088864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.200 [2024-11-20 06:45:48.088878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.200 qpair failed and we were unable to recover it.
00:34:28.200 [2024-11-20 06:45:48.098738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.200 [2024-11-20 06:45:48.098808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.200 [2024-11-20 06:45:48.098824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.200 [2024-11-20 06:45:48.098829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.200 [2024-11-20 06:45:48.098834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.200 [2024-11-20 06:45:48.098847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.200 qpair failed and we were unable to recover it.
00:34:28.200 [2024-11-20 06:45:48.108753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.200 [2024-11-20 06:45:48.108809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.200 [2024-11-20 06:45:48.108825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.200 [2024-11-20 06:45:48.108831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.200 [2024-11-20 06:45:48.108836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.200 [2024-11-20 06:45:48.108848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.200 qpair failed and we were unable to recover it. 00:34:28.462 [2024-11-20 06:45:48.118753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.462 [2024-11-20 06:45:48.118847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.462 [2024-11-20 06:45:48.118863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.462 [2024-11-20 06:45:48.118869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.462 [2024-11-20 06:45:48.118874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.462 [2024-11-20 06:45:48.118887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.462 qpair failed and we were unable to recover it. 00:34:28.462 [2024-11-20 06:45:48.128813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.462 [2024-11-20 06:45:48.128877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.462 [2024-11-20 06:45:48.128893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.462 [2024-11-20 06:45:48.128898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.462 [2024-11-20 06:45:48.128903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.462 [2024-11-20 06:45:48.128915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.462 qpair failed and we were unable to recover it. 
00:34:28.462 [2024-11-20 06:45:48.138869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.462 [2024-11-20 06:45:48.138928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.462 [2024-11-20 06:45:48.138956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.462 [2024-11-20 06:45:48.138961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.462 [2024-11-20 06:45:48.138966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.462 [2024-11-20 06:45:48.138979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.462 qpair failed and we were unable to recover it. 00:34:28.462 [2024-11-20 06:45:48.148855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.462 [2024-11-20 06:45:48.148924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.462 [2024-11-20 06:45:48.148941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.462 [2024-11-20 06:45:48.148947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.462 [2024-11-20 06:45:48.148952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.462 [2024-11-20 06:45:48.148964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.462 qpair failed and we were unable to recover it. 00:34:28.462 [2024-11-20 06:45:48.158772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.462 [2024-11-20 06:45:48.158834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.462 [2024-11-20 06:45:48.158849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.462 [2024-11-20 06:45:48.158857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.462 [2024-11-20 06:45:48.158862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.462 [2024-11-20 06:45:48.158874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.462 qpair failed and we were unable to recover it. 
00:34:28.462 [2024-11-20 06:45:48.168806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.462 [2024-11-20 06:45:48.168872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.462 [2024-11-20 06:45:48.168888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.168893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.168898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.168910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.178858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.178966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.178982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.178987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.178998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.179011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.188974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.189027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.189042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.189048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.189052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.189064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.198994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.199058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.199074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.199080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.199084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.199096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.209050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.209109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.209125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.209130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.209135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.209147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.219114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.219221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.219235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.219241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.219245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.219258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.229093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.229169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.229185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.229190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.229195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.229206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.239133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.239197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.239212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.239218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.239222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.239234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.249167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.249233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.249247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.249253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.249257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.249269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.259251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.259315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.259334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.259340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.259347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.259359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.269085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.269142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.269161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.269166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.269171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.269183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.279237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.279293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.279313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.279319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.279324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.279339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.289321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.289385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.289402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.289407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.463 [2024-11-20 06:45:48.289412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.463 [2024-11-20 06:45:48.289424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.463 qpair failed and we were unable to recover it.
00:34:28.463 [2024-11-20 06:45:48.299272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.463 [2024-11-20 06:45:48.299372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.463 [2024-11-20 06:45:48.299387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.463 [2024-11-20 06:45:48.299392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.464 [2024-11-20 06:45:48.299398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.464 [2024-11-20 06:45:48.299411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.464 qpair failed and we were unable to recover it.
00:34:28.464 [2024-11-20 06:45:48.309364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.464 [2024-11-20 06:45:48.309418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.464 [2024-11-20 06:45:48.309434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.464 [2024-11-20 06:45:48.309439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.464 [2024-11-20 06:45:48.309449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.464 [2024-11-20 06:45:48.309461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.464 qpair failed and we were unable to recover it.
00:34:28.464 [2024-11-20 06:45:48.319387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.464 [2024-11-20 06:45:48.319442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.464 [2024-11-20 06:45:48.319458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.464 [2024-11-20 06:45:48.319463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.464 [2024-11-20 06:45:48.319468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.464 [2024-11-20 06:45:48.319480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.464 qpair failed and we were unable to recover it.
00:34:28.464 [2024-11-20 06:45:48.329315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.464 [2024-11-20 06:45:48.329377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.464 [2024-11-20 06:45:48.329392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.464 [2024-11-20 06:45:48.329398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.464 [2024-11-20 06:45:48.329402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.464 [2024-11-20 06:45:48.329414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.464 qpair failed and we were unable to recover it.
00:34:28.464 [2024-11-20 06:45:48.339495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.464 [2024-11-20 06:45:48.339560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.464 [2024-11-20 06:45:48.339574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.464 [2024-11-20 06:45:48.339580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.464 [2024-11-20 06:45:48.339585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.464 [2024-11-20 06:45:48.339596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.464 qpair failed and we were unable to recover it.
00:34:28.464 [2024-11-20 06:45:48.349364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.464 [2024-11-20 06:45:48.349420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.464 [2024-11-20 06:45:48.349446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.464 [2024-11-20 06:45:48.349452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.464 [2024-11-20 06:45:48.349457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.464 [2024-11-20 06:45:48.349475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.464 qpair failed and we were unable to recover it.
00:34:28.464 [2024-11-20 06:45:48.359496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.464 [2024-11-20 06:45:48.359548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.464 [2024-11-20 06:45:48.359564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.464 [2024-11-20 06:45:48.359569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.464 [2024-11-20 06:45:48.359574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.464 [2024-11-20 06:45:48.359587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.464 qpair failed and we were unable to recover it.
00:34:28.464 [2024-11-20 06:45:48.369419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.464 [2024-11-20 06:45:48.369485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.464 [2024-11-20 06:45:48.369517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.464 [2024-11-20 06:45:48.369524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.464 [2024-11-20 06:45:48.369529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.464 [2024-11-20 06:45:48.369548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.464 qpair failed and we were unable to recover it.
00:34:28.727 [2024-11-20 06:45:48.379604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.727 [2024-11-20 06:45:48.379671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.727 [2024-11-20 06:45:48.379690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.727 [2024-11-20 06:45:48.379696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.727 [2024-11-20 06:45:48.379700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.727 [2024-11-20 06:45:48.379715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.727 qpair failed and we were unable to recover it.
00:34:28.727 [2024-11-20 06:45:48.389468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.727 [2024-11-20 06:45:48.389520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.727 [2024-11-20 06:45:48.389536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.727 [2024-11-20 06:45:48.389542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.727 [2024-11-20 06:45:48.389547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.727 [2024-11-20 06:45:48.389561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.727 qpair failed and we were unable to recover it.
00:34:28.727 [2024-11-20 06:45:48.399643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.727 [2024-11-20 06:45:48.399708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.727 [2024-11-20 06:45:48.399730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.727 [2024-11-20 06:45:48.399736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.727 [2024-11-20 06:45:48.399741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.727 [2024-11-20 06:45:48.399761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.727 qpair failed and we were unable to recover it.
00:34:28.727 [2024-11-20 06:45:48.409690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.727 [2024-11-20 06:45:48.409764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.727 [2024-11-20 06:45:48.409780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.727 [2024-11-20 06:45:48.409786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.727 [2024-11-20 06:45:48.409791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.727 [2024-11-20 06:45:48.409804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.727 qpair failed and we were unable to recover it.
00:34:28.727 [2024-11-20 06:45:48.419699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.727 [2024-11-20 06:45:48.419786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.727 [2024-11-20 06:45:48.419801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.727 [2024-11-20 06:45:48.419806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.727 [2024-11-20 06:45:48.419811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.727 [2024-11-20 06:45:48.419824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.727 qpair failed and we were unable to recover it.
00:34:28.727 [2024-11-20 06:45:48.429689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.727 [2024-11-20 06:45:48.429762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.727 [2024-11-20 06:45:48.429777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.727 [2024-11-20 06:45:48.429783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.727 [2024-11-20 06:45:48.429787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.727 [2024-11-20 06:45:48.429800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.727 qpair failed and we were unable to recover it.
00:34:28.727 [2024-11-20 06:45:48.439740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.727 [2024-11-20 06:45:48.439804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.439819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.439825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.439834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.439846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.728 [2024-11-20 06:45:48.449799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.728 [2024-11-20 06:45:48.449861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.449876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.449881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.449886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.449898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.728 [2024-11-20 06:45:48.459861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.728 [2024-11-20 06:45:48.459933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.459947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.459953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.459957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.459969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.728 [2024-11-20 06:45:48.469732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.728 [2024-11-20 06:45:48.469795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.469810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.469816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.469820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.469833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.728 [2024-11-20 06:45:48.479757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.728 [2024-11-20 06:45:48.479845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.479862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.479867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.479872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.479885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.728 [2024-11-20 06:45:48.489918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.728 [2024-11-20 06:45:48.489984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.490003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.490009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.490013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.490027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.728 [2024-11-20 06:45:48.499965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.728 [2024-11-20 06:45:48.500039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.500055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.500060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.500065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.500077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.728 [2024-11-20 06:45:48.509983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.728 [2024-11-20 06:45:48.510047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.510066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.510072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.510077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.510092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.728 [2024-11-20 06:45:48.519995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.728 [2024-11-20 06:45:48.520060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.520079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.520084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.520089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.520103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.728 [2024-11-20 06:45:48.530052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.728 [2024-11-20 06:45:48.530117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.530139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.530144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.530149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.530162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.728 [2024-11-20 06:45:48.540104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.728 [2024-11-20 06:45:48.540175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.540190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.540196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.540201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.540213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.728 [2024-11-20 06:45:48.550090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.728 [2024-11-20 06:45:48.550144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.728 [2024-11-20 06:45:48.550158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.728 [2024-11-20 06:45:48.550163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.728 [2024-11-20 06:45:48.550168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.728 [2024-11-20 06:45:48.550179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.728 qpair failed and we were unable to recover it.
00:34:28.729 [2024-11-20 06:45:48.560101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.729 [2024-11-20 06:45:48.560163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.729 [2024-11-20 06:45:48.560178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.729 [2024-11-20 06:45:48.560183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.729 [2024-11-20 06:45:48.560187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.729 [2024-11-20 06:45:48.560199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.729 qpair failed and we were unable to recover it.
00:34:28.729 [2024-11-20 06:45:48.570168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.729 [2024-11-20 06:45:48.570230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.729 [2024-11-20 06:45:48.570245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.729 [2024-11-20 06:45:48.570250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.729 [2024-11-20 06:45:48.570260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.729 [2024-11-20 06:45:48.570273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.729 qpair failed and we were unable to recover it.
00:34:28.729 [2024-11-20 06:45:48.580285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.729 [2024-11-20 06:45:48.580375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.729 [2024-11-20 06:45:48.580392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.729 [2024-11-20 06:45:48.580397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.729 [2024-11-20 06:45:48.580402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.729 [2024-11-20 06:45:48.580414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.729 qpair failed and we were unable to recover it.
00:34:28.729 [2024-11-20 06:45:48.590202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.729 [2024-11-20 06:45:48.590263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.729 [2024-11-20 06:45:48.590278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.729 [2024-11-20 06:45:48.590284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.729 [2024-11-20 06:45:48.590288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.729 [2024-11-20 06:45:48.590301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.729 qpair failed and we were unable to recover it.
00:34:28.729 [2024-11-20 06:45:48.600189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.729 [2024-11-20 06:45:48.600245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.729 [2024-11-20 06:45:48.600260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.729 [2024-11-20 06:45:48.600265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.729 [2024-11-20 06:45:48.600270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.729 [2024-11-20 06:45:48.600282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.729 qpair failed and we were unable to recover it.
00:34:28.729 [2024-11-20 06:45:48.610260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.729 [2024-11-20 06:45:48.610325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.729 [2024-11-20 06:45:48.610340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.729 [2024-11-20 06:45:48.610346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.729 [2024-11-20 06:45:48.610351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.729 [2024-11-20 06:45:48.610363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.729 qpair failed and we were unable to recover it.
00:34:28.729 [2024-11-20 06:45:48.620316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.729 [2024-11-20 06:45:48.620390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.729 [2024-11-20 06:45:48.620405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.729 [2024-11-20 06:45:48.620411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.729 [2024-11-20 06:45:48.620415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.729 [2024-11-20 06:45:48.620427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.729 qpair failed and we were unable to recover it.
00:34:28.729 [2024-11-20 06:45:48.630192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.729 [2024-11-20 06:45:48.630243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.729 [2024-11-20 06:45:48.630257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.729 [2024-11-20 06:45:48.630262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.729 [2024-11-20 06:45:48.630267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.729 [2024-11-20 06:45:48.630278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.729 qpair failed and we were unable to recover it.
00:34:28.729 [2024-11-20 06:45:48.640287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.729 [2024-11-20 06:45:48.640336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.729 [2024-11-20 06:45:48.640349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.729 [2024-11-20 06:45:48.640355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.729 [2024-11-20 06:45:48.640360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.729 [2024-11-20 06:45:48.640371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.729 qpair failed and we were unable to recover it.
00:34:28.992 [2024-11-20 06:45:48.650352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.992 [2024-11-20 06:45:48.650425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.992 [2024-11-20 06:45:48.650438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.992 [2024-11-20 06:45:48.650444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.992 [2024-11-20 06:45:48.650448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.992 [2024-11-20 06:45:48.650459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.992 qpair failed and we were unable to recover it.
00:34:28.992 [2024-11-20 06:45:48.660379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.992 [2024-11-20 06:45:48.660440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.992 [2024-11-20 06:45:48.660458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.992 [2024-11-20 06:45:48.660464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.992 [2024-11-20 06:45:48.660468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.992 [2024-11-20 06:45:48.660480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.992 qpair failed and we were unable to recover it.
00:34:28.992 [2024-11-20 06:45:48.670351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.992 [2024-11-20 06:45:48.670398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.992 [2024-11-20 06:45:48.670418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.992 [2024-11-20 06:45:48.670424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.992 [2024-11-20 06:45:48.670429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.992 [2024-11-20 06:45:48.670443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.992 qpair failed and we were unable to recover it.
00:34:28.992 [2024-11-20 06:45:48.680260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.992 [2024-11-20 06:45:48.680306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.992 [2024-11-20 06:45:48.680320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.992 [2024-11-20 06:45:48.680325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.992 [2024-11-20 06:45:48.680329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.992 [2024-11-20 06:45:48.680341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.992 qpair failed and we were unable to recover it.
00:34:28.992 [2024-11-20 06:45:48.690367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.992 [2024-11-20 06:45:48.690442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.992 [2024-11-20 06:45:48.690454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.992 [2024-11-20 06:45:48.690459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.992 [2024-11-20 06:45:48.690463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.992 [2024-11-20 06:45:48.690474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.992 qpair failed and we were unable to recover it.
00:34:28.992 [2024-11-20 06:45:48.700367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.992 [2024-11-20 06:45:48.700422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.992 [2024-11-20 06:45:48.700434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.992 [2024-11-20 06:45:48.700439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.992 [2024-11-20 06:45:48.700449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.992 [2024-11-20 06:45:48.700460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.992 qpair failed and we were unable to recover it.
00:34:28.992 [2024-11-20 06:45:48.710463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.992 [2024-11-20 06:45:48.710510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.993 [2024-11-20 06:45:48.710522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.993 [2024-11-20 06:45:48.710527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.993 [2024-11-20 06:45:48.710531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.993 [2024-11-20 06:45:48.710542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.993 qpair failed and we were unable to recover it.
00:34:28.993 [2024-11-20 06:45:48.720480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.993 [2024-11-20 06:45:48.720523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.993 [2024-11-20 06:45:48.720537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.993 [2024-11-20 06:45:48.720542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.993 [2024-11-20 06:45:48.720548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.993 [2024-11-20 06:45:48.720559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.993 qpair failed and we were unable to recover it.
00:34:28.993 [2024-11-20 06:45:48.730551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:28.993 [2024-11-20 06:45:48.730602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:28.993 [2024-11-20 06:45:48.730615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:28.993 [2024-11-20 06:45:48.730620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:28.993 [2024-11-20 06:45:48.730624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:28.993 [2024-11-20 06:45:48.730635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:28.993 qpair failed and we were unable to recover it.
00:34:28.993 [2024-11-20 06:45:48.740596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.993 [2024-11-20 06:45:48.740665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.993 [2024-11-20 06:45:48.740676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.993 [2024-11-20 06:45:48.740681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.993 [2024-11-20 06:45:48.740686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.993 [2024-11-20 06:45:48.740696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.993 qpair failed and we were unable to recover it. 00:34:28.993 [2024-11-20 06:45:48.750560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.993 [2024-11-20 06:45:48.750601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.993 [2024-11-20 06:45:48.750614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.993 [2024-11-20 06:45:48.750619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.993 [2024-11-20 06:45:48.750623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.993 [2024-11-20 06:45:48.750634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.993 qpair failed and we were unable to recover it. 00:34:28.993 [2024-11-20 06:45:48.760589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.993 [2024-11-20 06:45:48.760631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.993 [2024-11-20 06:45:48.760641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.993 [2024-11-20 06:45:48.760647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.993 [2024-11-20 06:45:48.760652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.993 [2024-11-20 06:45:48.760663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.993 qpair failed and we were unable to recover it. 
00:34:28.993 [2024-11-20 06:45:48.770667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.993 [2024-11-20 06:45:48.770718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.993 [2024-11-20 06:45:48.770729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.993 [2024-11-20 06:45:48.770734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.993 [2024-11-20 06:45:48.770739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.993 [2024-11-20 06:45:48.770754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.993 qpair failed and we were unable to recover it. 00:34:28.993 [2024-11-20 06:45:48.780702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.993 [2024-11-20 06:45:48.780761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.993 [2024-11-20 06:45:48.780772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.993 [2024-11-20 06:45:48.780777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.993 [2024-11-20 06:45:48.780782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.993 [2024-11-20 06:45:48.780791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.993 qpair failed and we were unable to recover it. 00:34:28.993 [2024-11-20 06:45:48.790683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.993 [2024-11-20 06:45:48.790729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.993 [2024-11-20 06:45:48.790751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.993 [2024-11-20 06:45:48.790757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.993 [2024-11-20 06:45:48.790761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.993 [2024-11-20 06:45:48.790771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.993 qpair failed and we were unable to recover it. 
00:34:28.993 [2024-11-20 06:45:48.800688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.993 [2024-11-20 06:45:48.800736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.993 [2024-11-20 06:45:48.800751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.993 [2024-11-20 06:45:48.800756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.993 [2024-11-20 06:45:48.800760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.993 [2024-11-20 06:45:48.800770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.993 qpair failed and we were unable to recover it. 00:34:28.993 [2024-11-20 06:45:48.810769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.993 [2024-11-20 06:45:48.810822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.993 [2024-11-20 06:45:48.810832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.993 [2024-11-20 06:45:48.810837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.993 [2024-11-20 06:45:48.810842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.993 [2024-11-20 06:45:48.810851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.993 qpair failed and we were unable to recover it. 00:34:28.993 [2024-11-20 06:45:48.820804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.993 [2024-11-20 06:45:48.820851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.993 [2024-11-20 06:45:48.820861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.993 [2024-11-20 06:45:48.820866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.993 [2024-11-20 06:45:48.820870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.993 [2024-11-20 06:45:48.820880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.993 qpair failed and we were unable to recover it. 
00:34:28.993 [2024-11-20 06:45:48.830771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.993 [2024-11-20 06:45:48.830809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.993 [2024-11-20 06:45:48.830819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.993 [2024-11-20 06:45:48.830824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.993 [2024-11-20 06:45:48.830831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.993 [2024-11-20 06:45:48.830840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.993 qpair failed and we were unable to recover it. 00:34:28.993 [2024-11-20 06:45:48.840677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.993 [2024-11-20 06:45:48.840719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.994 [2024-11-20 06:45:48.840729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.994 [2024-11-20 06:45:48.840734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.994 [2024-11-20 06:45:48.840738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.994 [2024-11-20 06:45:48.840751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.994 qpair failed and we were unable to recover it. 00:34:28.994 [2024-11-20 06:45:48.850754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.994 [2024-11-20 06:45:48.850808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.994 [2024-11-20 06:45:48.850817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.994 [2024-11-20 06:45:48.850822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.994 [2024-11-20 06:45:48.850827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.994 [2024-11-20 06:45:48.850836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.994 qpair failed and we were unable to recover it. 
00:34:28.994 [2024-11-20 06:45:48.860921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.994 [2024-11-20 06:45:48.860972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.994 [2024-11-20 06:45:48.860983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.994 [2024-11-20 06:45:48.860988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.994 [2024-11-20 06:45:48.860992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.994 [2024-11-20 06:45:48.861002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.994 qpair failed and we were unable to recover it. 00:34:28.994 [2024-11-20 06:45:48.870762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.994 [2024-11-20 06:45:48.870805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.994 [2024-11-20 06:45:48.870816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.994 [2024-11-20 06:45:48.870820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.994 [2024-11-20 06:45:48.870825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.994 [2024-11-20 06:45:48.870835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.994 qpair failed and we were unable to recover it. 00:34:28.994 [2024-11-20 06:45:48.880929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.994 [2024-11-20 06:45:48.880997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.994 [2024-11-20 06:45:48.881007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.994 [2024-11-20 06:45:48.881012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.994 [2024-11-20 06:45:48.881016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.994 [2024-11-20 06:45:48.881025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.994 qpair failed and we were unable to recover it. 
00:34:28.994 [2024-11-20 06:45:48.891004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.994 [2024-11-20 06:45:48.891055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.994 [2024-11-20 06:45:48.891064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.994 [2024-11-20 06:45:48.891069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.994 [2024-11-20 06:45:48.891073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.994 [2024-11-20 06:45:48.891083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.994 qpair failed and we were unable to recover it. 00:34:28.994 [2024-11-20 06:45:48.901071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:28.994 [2024-11-20 06:45:48.901119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:28.994 [2024-11-20 06:45:48.901129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:28.994 [2024-11-20 06:45:48.901134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:28.994 [2024-11-20 06:45:48.901139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:28.994 [2024-11-20 06:45:48.901148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.994 qpair failed and we were unable to recover it. 00:34:29.256 [2024-11-20 06:45:48.910890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.256 [2024-11-20 06:45:48.910929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.256 [2024-11-20 06:45:48.910939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.256 [2024-11-20 06:45:48.910944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.256 [2024-11-20 06:45:48.910948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.256 [2024-11-20 06:45:48.910957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.256 qpair failed and we were unable to recover it. 
00:34:29.256 [2024-11-20 06:45:48.921048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.256 [2024-11-20 06:45:48.921087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.256 [2024-11-20 06:45:48.921100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.256 [2024-11-20 06:45:48.921104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.256 [2024-11-20 06:45:48.921109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.256 [2024-11-20 06:45:48.921118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.256 qpair failed and we were unable to recover it. 00:34:29.256 [2024-11-20 06:45:48.931108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.256 [2024-11-20 06:45:48.931156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.256 [2024-11-20 06:45:48.931165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.256 [2024-11-20 06:45:48.931170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.256 [2024-11-20 06:45:48.931174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.256 [2024-11-20 06:45:48.931183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.256 qpair failed and we were unable to recover it. 00:34:29.256 [2024-11-20 06:45:48.941150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.256 [2024-11-20 06:45:48.941237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.256 [2024-11-20 06:45:48.941247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.256 [2024-11-20 06:45:48.941251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.256 [2024-11-20 06:45:48.941256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.256 [2024-11-20 06:45:48.941265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.256 qpair failed and we were unable to recover it. 
00:34:29.256 [2024-11-20 06:45:48.951116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.256 [2024-11-20 06:45:48.951159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:48.951169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:48.951174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:48.951178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:48.951187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 00:34:29.257 [2024-11-20 06:45:48.961143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:48.961197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:48.961207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:48.961212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:48.961218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:48.961228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 00:34:29.257 [2024-11-20 06:45:48.971224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:48.971271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:48.971281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:48.971286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:48.971290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:48.971300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 
00:34:29.257 [2024-11-20 06:45:48.981261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:48.981308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:48.981318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:48.981323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:48.981328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:48.981337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 00:34:29.257 [2024-11-20 06:45:48.991259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:48.991302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:48.991312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:48.991317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:48.991321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:48.991330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 00:34:29.257 [2024-11-20 06:45:49.001261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:49.001303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:49.001313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:49.001318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:49.001322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:49.001332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 
00:34:29.257 [2024-11-20 06:45:49.011321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:49.011370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:49.011380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:49.011385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:49.011389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:49.011398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 00:34:29.257 [2024-11-20 06:45:49.021357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:49.021405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:49.021415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:49.021420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:49.021425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:49.021434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 00:34:29.257 [2024-11-20 06:45:49.031323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:49.031374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:49.031384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:49.031389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:49.031393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:49.031403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 
00:34:29.257 [2024-11-20 06:45:49.041360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:49.041399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:49.041409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:49.041414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:49.041418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:49.041427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 00:34:29.257 [2024-11-20 06:45:49.051448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:49.051497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:49.051509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:49.051514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:49.051518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:49.051527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 00:34:29.257 [2024-11-20 06:45:49.061477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:49.061529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:49.061548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:49.061554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.257 [2024-11-20 06:45:49.061559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.257 [2024-11-20 06:45:49.061572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.257 qpair failed and we were unable to recover it. 
00:34:29.257 [2024-11-20 06:45:49.071442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.257 [2024-11-20 06:45:49.071486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.257 [2024-11-20 06:45:49.071505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.257 [2024-11-20 06:45:49.071511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.258 [2024-11-20 06:45:49.071516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.258 [2024-11-20 06:45:49.071529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.258 qpair failed and we were unable to recover it. 00:34:29.258 [2024-11-20 06:45:49.081456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.258 [2024-11-20 06:45:49.081545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.258 [2024-11-20 06:45:49.081564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.258 [2024-11-20 06:45:49.081570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.258 [2024-11-20 06:45:49.081575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.258 [2024-11-20 06:45:49.081589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.258 qpair failed and we were unable to recover it. 00:34:29.258 [2024-11-20 06:45:49.091533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.258 [2024-11-20 06:45:49.091612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.258 [2024-11-20 06:45:49.091624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.258 [2024-11-20 06:45:49.091629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.258 [2024-11-20 06:45:49.091636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.258 [2024-11-20 06:45:49.091647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.258 qpair failed and we were unable to recover it. 
00:34:29.258 [2024-11-20 06:45:49.101569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.258 [2024-11-20 06:45:49.101619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.258 [2024-11-20 06:45:49.101630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.258 [2024-11-20 06:45:49.101635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.258 [2024-11-20 06:45:49.101639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.258 [2024-11-20 06:45:49.101649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.258 qpair failed and we were unable to recover it. 00:34:29.258 [2024-11-20 06:45:49.111416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.258 [2024-11-20 06:45:49.111462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.258 [2024-11-20 06:45:49.111474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.258 [2024-11-20 06:45:49.111479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.258 [2024-11-20 06:45:49.111484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.258 [2024-11-20 06:45:49.111494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.258 qpair failed and we were unable to recover it. 00:34:29.258 [2024-11-20 06:45:49.121566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.258 [2024-11-20 06:45:49.121608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.258 [2024-11-20 06:45:49.121619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.258 [2024-11-20 06:45:49.121624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.258 [2024-11-20 06:45:49.121628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.258 [2024-11-20 06:45:49.121638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.258 qpair failed and we were unable to recover it. 
00:34:29.258 [2024-11-20 06:45:49.131639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.258 [2024-11-20 06:45:49.131690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.258 [2024-11-20 06:45:49.131700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.258 [2024-11-20 06:45:49.131705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.258 [2024-11-20 06:45:49.131710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.258 [2024-11-20 06:45:49.131719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.258 qpair failed and we were unable to recover it. 00:34:29.258 [2024-11-20 06:45:49.141689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.258 [2024-11-20 06:45:49.141736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.258 [2024-11-20 06:45:49.141749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.258 [2024-11-20 06:45:49.141755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.258 [2024-11-20 06:45:49.141759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.258 [2024-11-20 06:45:49.141769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.258 qpair failed and we were unable to recover it. 00:34:29.258 [2024-11-20 06:45:49.151672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.258 [2024-11-20 06:45:49.151710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.258 [2024-11-20 06:45:49.151720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.258 [2024-11-20 06:45:49.151725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.258 [2024-11-20 06:45:49.151729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.258 [2024-11-20 06:45:49.151738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.258 qpair failed and we were unable to recover it. 
00:34:29.258 [2024-11-20 06:45:49.161558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.258 [2024-11-20 06:45:49.161618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.258 [2024-11-20 06:45:49.161628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.258 [2024-11-20 06:45:49.161633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.258 [2024-11-20 06:45:49.161637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.258 [2024-11-20 06:45:49.161647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.258 qpair failed and we were unable to recover it. 00:34:29.520 [2024-11-20 06:45:49.171765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.520 [2024-11-20 06:45:49.171817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.520 [2024-11-20 06:45:49.171827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.520 [2024-11-20 06:45:49.171832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.520 [2024-11-20 06:45:49.171836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.520 [2024-11-20 06:45:49.171846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.520 qpair failed and we were unable to recover it. 00:34:29.520 [2024-11-20 06:45:49.181800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.520 [2024-11-20 06:45:49.181851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.520 [2024-11-20 06:45:49.181864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.520 [2024-11-20 06:45:49.181869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.520 [2024-11-20 06:45:49.181873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.520 [2024-11-20 06:45:49.181883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.521 qpair failed and we were unable to recover it. 
00:34:29.521 [2024-11-20 06:45:49.191645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.521 [2024-11-20 06:45:49.191691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.521 [2024-11-20 06:45:49.191701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.521 [2024-11-20 06:45:49.191706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.521 [2024-11-20 06:45:49.191710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.521 [2024-11-20 06:45:49.191720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.521 qpair failed and we were unable to recover it. 00:34:29.521 [2024-11-20 06:45:49.201795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.521 [2024-11-20 06:45:49.201838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.521 [2024-11-20 06:45:49.201848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.521 [2024-11-20 06:45:49.201853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.521 [2024-11-20 06:45:49.201858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.521 [2024-11-20 06:45:49.201867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.521 qpair failed and we were unable to recover it. 00:34:29.521 [2024-11-20 06:45:49.211925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.521 [2024-11-20 06:45:49.211976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.521 [2024-11-20 06:45:49.211985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.521 [2024-11-20 06:45:49.211990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.521 [2024-11-20 06:45:49.211994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.521 [2024-11-20 06:45:49.212004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.521 qpair failed and we were unable to recover it. 
00:34:29.521 [2024-11-20 06:45:49.221896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.521 [2024-11-20 06:45:49.221944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.521 [2024-11-20 06:45:49.221955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.521 [2024-11-20 06:45:49.221959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.521 [2024-11-20 06:45:49.221966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.521 [2024-11-20 06:45:49.221976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.521 qpair failed and we were unable to recover it. 00:34:29.521 [2024-11-20 06:45:49.231885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.521 [2024-11-20 06:45:49.231928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.521 [2024-11-20 06:45:49.231938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.521 [2024-11-20 06:45:49.231943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.521 [2024-11-20 06:45:49.231947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.521 [2024-11-20 06:45:49.231956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.521 qpair failed and we were unable to recover it. 00:34:29.521 [2024-11-20 06:45:49.241891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.521 [2024-11-20 06:45:49.241937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.521 [2024-11-20 06:45:49.241948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.521 [2024-11-20 06:45:49.241953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.521 [2024-11-20 06:45:49.241958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.521 [2024-11-20 06:45:49.241968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.521 qpair failed and we were unable to recover it. 
00:34:29.521 [2024-11-20 06:45:49.251975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.521 [2024-11-20 06:45:49.252047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.521 [2024-11-20 06:45:49.252057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.521 [2024-11-20 06:45:49.252063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.521 [2024-11-20 06:45:49.252067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.521 [2024-11-20 06:45:49.252077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.521 qpair failed and we were unable to recover it. 00:34:29.521 [2024-11-20 06:45:49.262017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.521 [2024-11-20 06:45:49.262065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.521 [2024-11-20 06:45:49.262075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.521 [2024-11-20 06:45:49.262080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.521 [2024-11-20 06:45:49.262085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.521 [2024-11-20 06:45:49.262094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.521 qpair failed and we were unable to recover it. 00:34:29.521 [2024-11-20 06:45:49.271884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:29.521 [2024-11-20 06:45:49.271929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:29.521 [2024-11-20 06:45:49.271939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:29.521 [2024-11-20 06:45:49.271944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:29.521 [2024-11-20 06:45:49.271948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:29.521 [2024-11-20 06:45:49.271958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:29.521 qpair failed and we were unable to recover it. 
[... the same seven-record CONNECT failure sequence repeats for every subsequent I/O qpair attempt, timestamps 2024-11-20 06:45:49.281981 through 06:45:49.933972: Unknown controller ID 0x1, Connect command failed rc -5 (sct 1, sc 130) on tqpair=0xb2f010, CQ transport error -6 (No such device or address) on qpair id 3, each ending "qpair failed and we were unable to recover it." ...]
00:34:30.050 [2024-11-20 06:45:49.943850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.050 [2024-11-20 06:45:49.943899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.050 [2024-11-20 06:45:49.943910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.050 [2024-11-20 06:45:49.943915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.050 [2024-11-20 06:45:49.943920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.050 [2024-11-20 06:45:49.943930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.050 qpair failed and we were unable to recover it. 00:34:30.050 [2024-11-20 06:45:49.953875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.050 [2024-11-20 06:45:49.953921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.050 [2024-11-20 06:45:49.953931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.050 [2024-11-20 06:45:49.953936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.050 [2024-11-20 06:45:49.953940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.050 [2024-11-20 06:45:49.953950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.050 qpair failed and we were unable to recover it. 00:34:30.312 [2024-11-20 06:45:49.963826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.312 [2024-11-20 06:45:49.963899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.312 [2024-11-20 06:45:49.963912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.312 [2024-11-20 06:45:49.963917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.312 [2024-11-20 06:45:49.963921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.312 [2024-11-20 06:45:49.963930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.312 qpair failed and we were unable to recover it. 
00:34:30.312 [2024-11-20 06:45:49.973980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.312 [2024-11-20 06:45:49.974038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.312 [2024-11-20 06:45:49.974048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.312 [2024-11-20 06:45:49.974053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.312 [2024-11-20 06:45:49.974057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.312 [2024-11-20 06:45:49.974066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.312 qpair failed and we were unable to recover it. 00:34:30.312 [2024-11-20 06:45:49.983997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.312 [2024-11-20 06:45:49.984044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.312 [2024-11-20 06:45:49.984054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.312 [2024-11-20 06:45:49.984059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.312 [2024-11-20 06:45:49.984063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.312 [2024-11-20 06:45:49.984072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.312 qpair failed and we were unable to recover it. 00:34:30.312 [2024-11-20 06:45:49.993850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.312 [2024-11-20 06:45:49.993891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.312 [2024-11-20 06:45:49.993901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.312 [2024-11-20 06:45:49.993906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.312 [2024-11-20 06:45:49.993910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.312 [2024-11-20 06:45:49.993920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.312 qpair failed and we were unable to recover it. 
00:34:30.312 [2024-11-20 06:45:50.004034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.312 [2024-11-20 06:45:50.004077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.312 [2024-11-20 06:45:50.004089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.312 [2024-11-20 06:45:50.004094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.312 [2024-11-20 06:45:50.004101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.312 [2024-11-20 06:45:50.004112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.312 qpair failed and we were unable to recover it. 00:34:30.312 [2024-11-20 06:45:50.013929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.312 [2024-11-20 06:45:50.013979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.312 [2024-11-20 06:45:50.013991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.312 [2024-11-20 06:45:50.013997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.312 [2024-11-20 06:45:50.014002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.312 [2024-11-20 06:45:50.014013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.312 qpair failed and we were unable to recover it. 00:34:30.312 [2024-11-20 06:45:50.024101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.312 [2024-11-20 06:45:50.024150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.312 [2024-11-20 06:45:50.024161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.312 [2024-11-20 06:45:50.024166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.312 [2024-11-20 06:45:50.024171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.312 [2024-11-20 06:45:50.024181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.312 qpair failed and we were unable to recover it. 
00:34:30.312 [2024-11-20 06:45:50.033953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.312 [2024-11-20 06:45:50.034000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.312 [2024-11-20 06:45:50.034010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.312 [2024-11-20 06:45:50.034016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.312 [2024-11-20 06:45:50.034020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.312 [2024-11-20 06:45:50.034029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.312 qpair failed and we were unable to recover it. 00:34:30.312 [2024-11-20 06:45:50.044102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.044148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.044160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.044166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.044170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.044181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.054086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.054179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.054190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.054194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.054199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.054209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 
00:34:30.313 [2024-11-20 06:45:50.064250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.064316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.064327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.064331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.064336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.064346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.074100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.074145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.074156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.074162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.074166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.074177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.084223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.084269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.084281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.084286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.084291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.084301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 
00:34:30.313 [2024-11-20 06:45:50.094312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.094372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.094388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.094393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.094398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.094408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.104339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.104390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.104401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.104406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.104411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.104421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.114341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.114409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.114420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.114425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.114430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.114439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 
00:34:30.313 [2024-11-20 06:45:50.124306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.124344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.124355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.124360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.124364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.124374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.134399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.134450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.134461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.134466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.134473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.134483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.144416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.144462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.144473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.144478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.144483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.144493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 
00:34:30.313 [2024-11-20 06:45:50.154389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.154434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.154445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.154450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.154454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.154464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.164357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.164405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.164415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.164421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.164425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.164435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.174490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.174536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.174547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.174552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.174556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.174566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 
00:34:30.313 [2024-11-20 06:45:50.184534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.184584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.184595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.184600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.184604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.184614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.194496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.194541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.194552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.194557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.194561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.194571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.204526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.204563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.204574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.204579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.204583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.204593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 
00:34:30.313 [2024-11-20 06:45:50.214615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.214665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.214675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.214680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.214684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.214694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.313 [2024-11-20 06:45:50.224610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.313 [2024-11-20 06:45:50.224676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.313 [2024-11-20 06:45:50.224690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.313 [2024-11-20 06:45:50.224695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.313 [2024-11-20 06:45:50.224700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.313 [2024-11-20 06:45:50.224710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.313 qpair failed and we were unable to recover it. 00:34:30.574 [2024-11-20 06:45:50.234661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.574 [2024-11-20 06:45:50.234707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.574 [2024-11-20 06:45:50.234718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.574 [2024-11-20 06:45:50.234724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.574 [2024-11-20 06:45:50.234728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.234738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 
00:34:30.575 [2024-11-20 06:45:50.244683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.244760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.244771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.244776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.244781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.244791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 00:34:30.575 [2024-11-20 06:45:50.254637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.254697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.254708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.254713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.254717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.254727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 00:34:30.575 [2024-11-20 06:45:50.264780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.264843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.264853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.264859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.264867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.264877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 
00:34:30.575 [2024-11-20 06:45:50.274597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.274640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.274651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.274656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.274661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.274671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 00:34:30.575 [2024-11-20 06:45:50.284756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.284799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.284810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.284814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.284819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.284829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 00:34:30.575 [2024-11-20 06:45:50.294803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.294852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.294862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.294867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.294872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.294881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 
00:34:30.575 [2024-11-20 06:45:50.304848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.304901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.304912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.304917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.304921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.304931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 00:34:30.575 [2024-11-20 06:45:50.314775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.314817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.314827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.314832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.314837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.314847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 00:34:30.575 [2024-11-20 06:45:50.324889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.324963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.324973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.324978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.324983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.324992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 
00:34:30.575 [2024-11-20 06:45:50.334807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.334856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.334867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.334872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.334876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.334886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 00:34:30.575 [2024-11-20 06:45:50.344900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.344954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.344964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.344969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.344973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.344983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 00:34:30.575 [2024-11-20 06:45:50.355011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.355058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.575 [2024-11-20 06:45:50.355071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.575 [2024-11-20 06:45:50.355076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.575 [2024-11-20 06:45:50.355080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.575 [2024-11-20 06:45:50.355090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.575 qpair failed and we were unable to recover it. 
00:34:30.575 [2024-11-20 06:45:50.364973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.575 [2024-11-20 06:45:50.365013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.576 [2024-11-20 06:45:50.365025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.576 [2024-11-20 06:45:50.365030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.576 [2024-11-20 06:45:50.365035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.576 [2024-11-20 06:45:50.365045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.576 qpair failed and we were unable to recover it. 00:34:30.576 [2024-11-20 06:45:50.375138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.576 [2024-11-20 06:45:50.375192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.576 [2024-11-20 06:45:50.375203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.576 [2024-11-20 06:45:50.375208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.576 [2024-11-20 06:45:50.375212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.576 [2024-11-20 06:45:50.375222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.576 qpair failed and we were unable to recover it. 00:34:30.576 [2024-11-20 06:45:50.385080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.576 [2024-11-20 06:45:50.385163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.576 [2024-11-20 06:45:50.385174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.576 [2024-11-20 06:45:50.385179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.576 [2024-11-20 06:45:50.385184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:30.576 [2024-11-20 06:45:50.385193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:30.576 qpair failed and we were unable to recover it. 
00:34:30.576 [2024-11-20 06:45:50.395066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.576 [2024-11-20 06:45:50.395108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:30.576 [2024-11-20 06:45:50.395119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:30.576 [2024-11-20 06:45:50.395124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:30.576 [2024-11-20 06:45:50.395131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:30.576 [2024-11-20 06:45:50.395141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:30.576 qpair failed and we were unable to recover it.
00:34:30.576 [... the identical seven-line CONNECT-failure sequence repeats at ~10 ms intervals from 06:45:50.405 through 06:45:51.066, always for tqpair=0xb2f010 / qpair id 3, each pass ending "qpair failed and we were unable to recover it.", with the Jenkins clock advancing 00:34:30.576 -> 00:34:30.838 -> 00:34:31.101 -> 00:34:31.366 ...]
00:34:31.366 [2024-11-20 06:45:51.076869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.366 [2024-11-20 06:45:51.076951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.366 [2024-11-20 06:45:51.076961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.366 [2024-11-20 06:45:51.076966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.366 [2024-11-20 06:45:51.076970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010
00:34:31.366 [2024-11-20 06:45:51.076980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:31.366 qpair failed and we were unable to recover it.
00:34:31.366 [2024-11-20 06:45:51.086859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.366 [2024-11-20 06:45:51.086898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.366 [2024-11-20 06:45:51.086908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.366 [2024-11-20 06:45:51.086913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.086917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.086926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 00:34:31.367 [2024-11-20 06:45:51.096977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.097024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.097033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.097038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.097042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.097052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 00:34:31.367 [2024-11-20 06:45:51.106822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.106865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.106875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.106880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.106884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.106893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 
00:34:31.367 [2024-11-20 06:45:51.116841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.116883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.116894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.116899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.116903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.116913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 00:34:31.367 [2024-11-20 06:45:51.126860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.126905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.126915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.126920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.126925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.126934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 00:34:31.367 [2024-11-20 06:45:51.137079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.137128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.137141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.137145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.137150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.137159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 
00:34:31.367 [2024-11-20 06:45:51.147061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.147109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.147119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.147124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.147128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.147137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 00:34:31.367 [2024-11-20 06:45:51.157048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.157091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.157100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.157105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.157109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.157118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 00:34:31.367 [2024-11-20 06:45:51.167109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.167149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.167159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.167163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.167168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.167177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 
00:34:31.367 [2024-11-20 06:45:51.177233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.177293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.177303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.177308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.177315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.177325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 00:34:31.367 [2024-11-20 06:45:51.187175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.187221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.187231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.187236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.187240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.187249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 00:34:31.367 [2024-11-20 06:45:51.197182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.197235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.197244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.197249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.367 [2024-11-20 06:45:51.197253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.367 [2024-11-20 06:45:51.197262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.367 qpair failed and we were unable to recover it. 
00:34:31.367 [2024-11-20 06:45:51.207198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.367 [2024-11-20 06:45:51.207238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.367 [2024-11-20 06:45:51.207248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.367 [2024-11-20 06:45:51.207252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.368 [2024-11-20 06:45:51.207257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.368 [2024-11-20 06:45:51.207266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.368 qpair failed and we were unable to recover it. 00:34:31.368 [2024-11-20 06:45:51.217268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.368 [2024-11-20 06:45:51.217317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.368 [2024-11-20 06:45:51.217327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.368 [2024-11-20 06:45:51.217331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.368 [2024-11-20 06:45:51.217336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.368 [2024-11-20 06:45:51.217345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.368 qpair failed and we were unable to recover it. 00:34:31.368 [2024-11-20 06:45:51.227271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.368 [2024-11-20 06:45:51.227314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.368 [2024-11-20 06:45:51.227324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.368 [2024-11-20 06:45:51.227329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.368 [2024-11-20 06:45:51.227333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.368 [2024-11-20 06:45:51.227342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.368 qpair failed and we were unable to recover it. 
00:34:31.368 [2024-11-20 06:45:51.237295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.368 [2024-11-20 06:45:51.237341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.368 [2024-11-20 06:45:51.237350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.368 [2024-11-20 06:45:51.237355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.368 [2024-11-20 06:45:51.237360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.368 [2024-11-20 06:45:51.237369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.368 qpair failed and we were unable to recover it. 00:34:31.368 [2024-11-20 06:45:51.247335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.368 [2024-11-20 06:45:51.247375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.368 [2024-11-20 06:45:51.247385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.368 [2024-11-20 06:45:51.247390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.368 [2024-11-20 06:45:51.247394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.368 [2024-11-20 06:45:51.247403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.368 qpair failed and we were unable to recover it. 00:34:31.368 [2024-11-20 06:45:51.257433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.368 [2024-11-20 06:45:51.257483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.368 [2024-11-20 06:45:51.257493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.368 [2024-11-20 06:45:51.257498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.368 [2024-11-20 06:45:51.257502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.368 [2024-11-20 06:45:51.257511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.368 qpair failed and we were unable to recover it. 
00:34:31.368 [2024-11-20 06:45:51.267396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.368 [2024-11-20 06:45:51.267438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.368 [2024-11-20 06:45:51.267451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.368 [2024-11-20 06:45:51.267457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.368 [2024-11-20 06:45:51.267461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.368 [2024-11-20 06:45:51.267471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.368 qpair failed and we were unable to recover it. 00:34:31.368 [2024-11-20 06:45:51.277416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.368 [2024-11-20 06:45:51.277493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.368 [2024-11-20 06:45:51.277503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.368 [2024-11-20 06:45:51.277508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.368 [2024-11-20 06:45:51.277512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.368 [2024-11-20 06:45:51.277521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.368 qpair failed and we were unable to recover it. 00:34:31.630 [2024-11-20 06:45:51.287430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.630 [2024-11-20 06:45:51.287469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.630 [2024-11-20 06:45:51.287479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.630 [2024-11-20 06:45:51.287484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.630 [2024-11-20 06:45:51.287488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.630 [2024-11-20 06:45:51.287498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.630 qpair failed and we were unable to recover it. 
00:34:31.630 [2024-11-20 06:45:51.297510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.630 [2024-11-20 06:45:51.297556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.630 [2024-11-20 06:45:51.297566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.630 [2024-11-20 06:45:51.297571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.630 [2024-11-20 06:45:51.297575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.630 [2024-11-20 06:45:51.297585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.630 qpair failed and we were unable to recover it. 00:34:31.630 [2024-11-20 06:45:51.307492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.630 [2024-11-20 06:45:51.307536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.630 [2024-11-20 06:45:51.307546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.630 [2024-11-20 06:45:51.307551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.630 [2024-11-20 06:45:51.307558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.630 [2024-11-20 06:45:51.307567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.630 qpair failed and we were unable to recover it. 00:34:31.630 [2024-11-20 06:45:51.317429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.630 [2024-11-20 06:45:51.317471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.630 [2024-11-20 06:45:51.317481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.630 [2024-11-20 06:45:51.317486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.630 [2024-11-20 06:45:51.317490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.317499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 
00:34:31.631 [2024-11-20 06:45:51.327445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.327506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.327516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.327521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.327525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.327534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 00:34:31.631 [2024-11-20 06:45:51.337491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.337543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.337553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.337558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.337562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.337572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 00:34:31.631 [2024-11-20 06:45:51.347615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.347667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.347686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.347692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.347697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.347711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 
00:34:31.631 [2024-11-20 06:45:51.357633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.357675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.357687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.357692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.357697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.357707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 00:34:31.631 [2024-11-20 06:45:51.367511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.367552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.367562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.367567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.367571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.367581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 00:34:31.631 [2024-11-20 06:45:51.377575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.377624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.377634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.377639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.377643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.377653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 
00:34:31.631 [2024-11-20 06:45:51.387709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.387756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.387767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.387772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.387776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.387786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 00:34:31.631 [2024-11-20 06:45:51.397705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.397749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.397766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.397771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.397775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.397785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 00:34:31.631 [2024-11-20 06:45:51.407710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.407754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.407765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.407770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.407774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.407783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 
00:34:31.631 [2024-11-20 06:45:51.417790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.417850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.417859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.417864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.417869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.417878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 00:34:31.631 [2024-11-20 06:45:51.427694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.427737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.427753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.427758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.427762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.427773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 00:34:31.631 [2024-11-20 06:45:51.437838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.437879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.437889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.437894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.437901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.631 [2024-11-20 06:45:51.437911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.631 qpair failed and we were unable to recover it. 
00:34:31.631 [2024-11-20 06:45:51.447846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.631 [2024-11-20 06:45:51.447901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.631 [2024-11-20 06:45:51.447912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.631 [2024-11-20 06:45:51.447917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.631 [2024-11-20 06:45:51.447921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.632 [2024-11-20 06:45:51.447932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.632 qpair failed and we were unable to recover it. 00:34:31.632 [2024-11-20 06:45:51.457919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.632 [2024-11-20 06:45:51.457966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.632 [2024-11-20 06:45:51.457978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.632 [2024-11-20 06:45:51.457982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.632 [2024-11-20 06:45:51.457987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.632 [2024-11-20 06:45:51.457997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.632 qpair failed and we were unable to recover it. 00:34:31.632 [2024-11-20 06:45:51.467938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.632 [2024-11-20 06:45:51.467982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.632 [2024-11-20 06:45:51.467992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.632 [2024-11-20 06:45:51.467997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.632 [2024-11-20 06:45:51.468001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.632 [2024-11-20 06:45:51.468011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.632 qpair failed and we were unable to recover it. 
00:34:31.632 [2024-11-20 06:45:51.477921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.632 [2024-11-20 06:45:51.477996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.632 [2024-11-20 06:45:51.478006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.632 [2024-11-20 06:45:51.478011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.632 [2024-11-20 06:45:51.478015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.632 [2024-11-20 06:45:51.478025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.632 qpair failed and we were unable to recover it. 00:34:31.632 [2024-11-20 06:45:51.487970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.632 [2024-11-20 06:45:51.488011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.632 [2024-11-20 06:45:51.488021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.632 [2024-11-20 06:45:51.488026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.632 [2024-11-20 06:45:51.488030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.632 [2024-11-20 06:45:51.488039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.632 qpair failed and we were unable to recover it. 00:34:31.632 [2024-11-20 06:45:51.498055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.632 [2024-11-20 06:45:51.498132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.632 [2024-11-20 06:45:51.498142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.632 [2024-11-20 06:45:51.498147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.632 [2024-11-20 06:45:51.498151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.632 [2024-11-20 06:45:51.498161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.632 qpair failed and we were unable to recover it. 
00:34:31.632 [2024-11-20 06:45:51.508034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.632 [2024-11-20 06:45:51.508082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.632 [2024-11-20 06:45:51.508094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.632 [2024-11-20 06:45:51.508099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.632 [2024-11-20 06:45:51.508103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.632 [2024-11-20 06:45:51.508113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.632 qpair failed and we were unable to recover it. 00:34:31.632 [2024-11-20 06:45:51.518060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.632 [2024-11-20 06:45:51.518101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.632 [2024-11-20 06:45:51.518112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.632 [2024-11-20 06:45:51.518118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.632 [2024-11-20 06:45:51.518123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.632 [2024-11-20 06:45:51.518133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.632 qpair failed and we were unable to recover it. 00:34:31.632 [2024-11-20 06:45:51.528076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.632 [2024-11-20 06:45:51.528121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.632 [2024-11-20 06:45:51.528134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.632 [2024-11-20 06:45:51.528139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.632 [2024-11-20 06:45:51.528144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.632 [2024-11-20 06:45:51.528154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.632 qpair failed and we were unable to recover it. 
00:34:31.632 [2024-11-20 06:45:51.538156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.632 [2024-11-20 06:45:51.538207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.632 [2024-11-20 06:45:51.538218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.632 [2024-11-20 06:45:51.538223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.632 [2024-11-20 06:45:51.538227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.632 [2024-11-20 06:45:51.538237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.632 qpair failed and we were unable to recover it. 00:34:31.894 [2024-11-20 06:45:51.548000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.894 [2024-11-20 06:45:51.548043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.894 [2024-11-20 06:45:51.548053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.894 [2024-11-20 06:45:51.548058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.894 [2024-11-20 06:45:51.548062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.894 [2024-11-20 06:45:51.548072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.894 qpair failed and we were unable to recover it. 00:34:31.894 [2024-11-20 06:45:51.558158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.894 [2024-11-20 06:45:51.558248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.894 [2024-11-20 06:45:51.558258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.894 [2024-11-20 06:45:51.558263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.894 [2024-11-20 06:45:51.558267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.894 [2024-11-20 06:45:51.558276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.894 qpair failed and we were unable to recover it. 
00:34:31.894 [2024-11-20 06:45:51.568167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.894 [2024-11-20 06:45:51.568212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.894 [2024-11-20 06:45:51.568222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.894 [2024-11-20 06:45:51.568226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.894 [2024-11-20 06:45:51.568234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.894 [2024-11-20 06:45:51.568243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.894 qpair failed and we were unable to recover it. 00:34:31.894 [2024-11-20 06:45:51.578118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.894 [2024-11-20 06:45:51.578168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.894 [2024-11-20 06:45:51.578178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.894 [2024-11-20 06:45:51.578182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.894 [2024-11-20 06:45:51.578187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.894 [2024-11-20 06:45:51.578196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.894 qpair failed and we were unable to recover it. 00:34:31.894 [2024-11-20 06:45:51.588255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.894 [2024-11-20 06:45:51.588303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.894 [2024-11-20 06:45:51.588313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.894 [2024-11-20 06:45:51.588318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.894 [2024-11-20 06:45:51.588322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.894 [2024-11-20 06:45:51.588331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.894 qpair failed and we were unable to recover it. 
00:34:31.894 [2024-11-20 06:45:51.598264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.894 [2024-11-20 06:45:51.598353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.894 [2024-11-20 06:45:51.598362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.894 [2024-11-20 06:45:51.598367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.894 [2024-11-20 06:45:51.598372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.894 [2024-11-20 06:45:51.598381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.894 qpair failed and we were unable to recover it. 00:34:31.894 [2024-11-20 06:45:51.608282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.894 [2024-11-20 06:45:51.608329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.894 [2024-11-20 06:45:51.608340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.894 [2024-11-20 06:45:51.608344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.608349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.608358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 00:34:31.895 [2024-11-20 06:45:51.618293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.618341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.618350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.618355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.618359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.618369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 
00:34:31.895 [2024-11-20 06:45:51.628363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.628413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.628423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.628428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.628432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.628442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 00:34:31.895 [2024-11-20 06:45:51.638363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.638419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.638429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.638434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.638438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.638448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 00:34:31.895 [2024-11-20 06:45:51.648399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.648442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.648451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.648456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.648461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.648470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 
00:34:31.895 [2024-11-20 06:45:51.658444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.658497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.658509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.658514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.658518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.658528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 00:34:31.895 [2024-11-20 06:45:51.668521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.668591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.668600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.668605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.668609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.668618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 00:34:31.895 [2024-11-20 06:45:51.678495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.678543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.678563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.678569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.678574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.678587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 
00:34:31.895 [2024-11-20 06:45:51.688491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.688538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.688558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.688564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.688569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.688583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 00:34:31.895 [2024-11-20 06:45:51.698542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.698592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.698604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.698610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.698618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.698628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 00:34:31.895 [2024-11-20 06:45:51.708576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.708622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.708633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.708638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.708642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.708652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 
00:34:31.895 [2024-11-20 06:45:51.718594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.718644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.718655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.718660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.718664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.718674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 00:34:31.895 [2024-11-20 06:45:51.728635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.728725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.728735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.895 [2024-11-20 06:45:51.728740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.895 [2024-11-20 06:45:51.728748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.895 [2024-11-20 06:45:51.728758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.895 qpair failed and we were unable to recover it. 00:34:31.895 [2024-11-20 06:45:51.738691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.895 [2024-11-20 06:45:51.738773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.895 [2024-11-20 06:45:51.738784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.896 [2024-11-20 06:45:51.738789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.896 [2024-11-20 06:45:51.738793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.896 [2024-11-20 06:45:51.738803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.896 qpair failed and we were unable to recover it. 
00:34:31.896 [2024-11-20 06:45:51.748674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.896 [2024-11-20 06:45:51.748716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.896 [2024-11-20 06:45:51.748726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.896 [2024-11-20 06:45:51.748731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.896 [2024-11-20 06:45:51.748735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.896 [2024-11-20 06:45:51.748748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.896 qpair failed and we were unable to recover it. 00:34:31.896 [2024-11-20 06:45:51.758712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.896 [2024-11-20 06:45:51.758758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.896 [2024-11-20 06:45:51.758768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.896 [2024-11-20 06:45:51.758773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.896 [2024-11-20 06:45:51.758777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.896 [2024-11-20 06:45:51.758787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.896 qpair failed and we were unable to recover it. 00:34:31.896 [2024-11-20 06:45:51.768699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.896 [2024-11-20 06:45:51.768741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.896 [2024-11-20 06:45:51.768755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.896 [2024-11-20 06:45:51.768760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.896 [2024-11-20 06:45:51.768765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.896 [2024-11-20 06:45:51.768775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.896 qpair failed and we were unable to recover it. 
00:34:31.896 [2024-11-20 06:45:51.778802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.896 [2024-11-20 06:45:51.778854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.896 [2024-11-20 06:45:51.778868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.896 [2024-11-20 06:45:51.778873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.896 [2024-11-20 06:45:51.778877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.896 [2024-11-20 06:45:51.778887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.896 qpair failed and we were unable to recover it. 00:34:31.896 [2024-11-20 06:45:51.788658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.896 [2024-11-20 06:45:51.788706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.896 [2024-11-20 06:45:51.788719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.896 [2024-11-20 06:45:51.788724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.896 [2024-11-20 06:45:51.788728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.896 [2024-11-20 06:45:51.788738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.896 qpair failed and we were unable to recover it. 00:34:31.896 [2024-11-20 06:45:51.798706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.896 [2024-11-20 06:45:51.798766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.896 [2024-11-20 06:45:51.798776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.896 [2024-11-20 06:45:51.798781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.896 [2024-11-20 06:45:51.798785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.896 [2024-11-20 06:45:51.798795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.896 qpair failed and we were unable to recover it. 
00:34:31.896 [2024-11-20 06:45:51.808816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.896 [2024-11-20 06:45:51.808874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.896 [2024-11-20 06:45:51.808884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.896 [2024-11-20 06:45:51.808889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.896 [2024-11-20 06:45:51.808894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb2f010 00:34:31.896 [2024-11-20 06:45:51.808903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:31.896 qpair failed and we were unable to recover it. 00:34:32.157 [2024-11-20 06:45:51.818843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.157 [2024-11-20 06:45:51.818978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.157 [2024-11-20 06:45:51.819044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.157 [2024-11-20 06:45:51.819069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.157 [2024-11-20 06:45:51.819090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54d8000b90 00:34:32.157 [2024-11-20 06:45:51.819147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.157 qpair failed and we were unable to recover it. 00:34:32.157 [2024-11-20 06:45:51.828887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.157 [2024-11-20 06:45:51.828975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.157 [2024-11-20 06:45:51.829012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.157 [2024-11-20 06:45:51.829031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.157 [2024-11-20 06:45:51.829058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f54d8000b90 00:34:32.157 [2024-11-20 06:45:51.829098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.157 qpair failed and we were unable to recover it. 00:34:32.157 [2024-11-20 06:45:51.829242] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:34:32.157 A controller has encountered a failure and is being reset. 00:34:32.157 [2024-11-20 06:45:51.829409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3cf30 (9): Bad file descriptor 00:34:32.157 Controller properly reset. 
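The block above shows the same CONNECT retry failing repeatedly while the target side rejects each new I/O queue pair ("Unknown controller ID 0x1"): the host reports sct 1, sc 130, and sc 130 (0x82) appears to correspond to the NVMe-oF Fabrics CONNECT "Invalid Parameters" status, consistent with the target no longer knowing the controller the host is attaching qpairs to. Once the Keep Alive submission also fails, recovery is abandoned and the controller is reset. A minimal triage sketch for a captured log like this one, assuming only standard shell tools; the helper name and log path are illustrative and not part of the SPDK test suite:

#!/usr/bin/env bash
# Hypothetical triage helper: summarize the failed-recovery pattern seen above.
log=${1:-build.log}   # illustrative path, not an SPDK artifact name

# Count how many qpair recovery attempts failed before the controller reset.
attempts=$(grep -c 'qpair failed and we were unable to recover it' "$log")

# Decode the status pair reported by nvme_fabric_qpair_connect_poll.
# sct 1 is the command-specific status type; for a Fabrics CONNECT command,
# sc 130 (0x82) is the "Connect Invalid Parameters" status.
decode_connect_status() {
    local sct=$1 sc=$2
    if (( sct == 1 && sc == 130 )); then
        echo "Fabrics CONNECT rejected: invalid parameters (0x82)"
    else
        echo "unrecognized status: sct=$sct sc=$sc"
    fi
}

echo "failed recovery attempts: $attempts"
decode_connect_status 1 130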
00:34:32.157 [2024-11-20 06:45:51.860035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x742e60 is same with the state(6) to be set 00:34:32.157 Initializing NVMe Controllers 00:34:32.157 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:32.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:32.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:32.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:32.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:32.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:32.157 Initialization complete. Launching workers. 00:34:32.157 Starting thread on core 1 00:34:32.157 Starting thread on core 2 00:34:32.157 Starting thread on core 3 00:34:32.157 Starting thread on core 0 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:32.158 00:34:32.158 real 0m11.325s 00:34:32.158 user 0m21.658s 00:34:32.158 sys 0m3.900s 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:32.158 ************************************ 00:34:32.158 END TEST nvmf_target_disconnect_tc2 00:34:32.158 ************************************ 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.158 rmmod nvme_tcp 00:34:32.158 rmmod nvme_fabrics 00:34:32.158 rmmod nvme_keyring 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2906905 ']' 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2906905 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2906905 ']' 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 2906905 00:34:32.158 06:45:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:32.158 06:45:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2906905 00:34:32.158 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:34:32.158 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:34:32.158 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2906905' 00:34:32.158 killing process with pid 2906905 00:34:32.158 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 2906905 00:34:32.158 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 2906905 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.419 06:45:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.963 06:45:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:34.963 00:34:34.963 real 0m21.844s 00:34:34.963 user 0m49.021s 00:34:34.963 sys 0m10.174s 00:34:34.963 06:45:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:34.963 06:45:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:34.963 ************************************ 00:34:34.963 END TEST nvmf_target_disconnect 00:34:34.963 ************************************ 00:34:34.963 06:45:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:34.963 00:34:34.963 real 6m35.252s 00:34:34.963 user 11m22.490s 00:34:34.963 sys 2m17.141s 00:34:34.963 06:45:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:34.963 06:45:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.963 ************************************ 00:34:34.963 END TEST nvmf_host 00:34:34.963 ************************************ 00:34:34.963 06:45:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = 
\t\c\p ]] 00:34:34.963 06:45:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:34:34.963 06:45:54 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:34:34.963 06:45:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:34.963 06:45:54 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:34.963 06:45:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:34.963 ************************************ 00:34:34.963 START TEST nvmf_target_core_interrupt_mode 00:34:34.963 ************************************ 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:34:34.963 * Looking for test storage... 00:34:34.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.963 --rc genhtml_branch_coverage=1 00:34:34.963 --rc genhtml_function_coverage=1 00:34:34.963 --rc genhtml_legend=1 00:34:34.963 --rc geninfo_all_blocks=1 00:34:34.963 --rc geninfo_unexecuted_blocks=1 00:34:34.963 00:34:34.963 ' 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.963 --rc genhtml_branch_coverage=1 00:34:34.963 --rc genhtml_function_coverage=1 00:34:34.963 --rc genhtml_legend=1 00:34:34.963 --rc geninfo_all_blocks=1 00:34:34.963 --rc geninfo_unexecuted_blocks=1 00:34:34.963 00:34:34.963 ' 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.963 --rc genhtml_branch_coverage=1 00:34:34.963 --rc genhtml_function_coverage=1 00:34:34.963 --rc genhtml_legend=1 00:34:34.963 --rc geninfo_all_blocks=1 00:34:34.963 --rc geninfo_unexecuted_blocks=1 00:34:34.963 00:34:34.963 ' 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.963 --rc genhtml_branch_coverage=1 00:34:34.963 --rc genhtml_function_coverage=1 00:34:34.963 --rc genhtml_legend=1 00:34:34.963 --rc geninfo_all_blocks=1 00:34:34.963 --rc geninfo_unexecuted_blocks=1 00:34:34.963 00:34:34.963 ' 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.963 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:34.964 ************************************ 00:34:34.964 START TEST nvmf_abort 00:34:34.964 ************************************ 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:34.964 * Looking for test storage... 00:34:34.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.964 --rc genhtml_branch_coverage=1 00:34:34.964 --rc genhtml_function_coverage=1 00:34:34.964 --rc genhtml_legend=1 00:34:34.964 --rc geninfo_all_blocks=1 00:34:34.964 --rc geninfo_unexecuted_blocks=1 00:34:34.964 00:34:34.964 ' 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.964 --rc genhtml_branch_coverage=1 00:34:34.964 --rc genhtml_function_coverage=1 00:34:34.964 --rc genhtml_legend=1 00:34:34.964 --rc geninfo_all_blocks=1 00:34:34.964 --rc geninfo_unexecuted_blocks=1 00:34:34.964 00:34:34.964 ' 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.964 --rc genhtml_branch_coverage=1 00:34:34.964 --rc genhtml_function_coverage=1 00:34:34.964 --rc genhtml_legend=1 00:34:34.964 --rc geninfo_all_blocks=1 00:34:34.964 --rc geninfo_unexecuted_blocks=1 00:34:34.964 00:34:34.964 ' 00:34:34.964 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:34.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.964 --rc genhtml_branch_coverage=1 00:34:34.965 --rc genhtml_function_coverage=1 00:34:34.965 --rc genhtml_legend=1 00:34:34.965 --rc geninfo_all_blocks=1 00:34:34.965 --rc geninfo_unexecuted_blocks=1 00:34:34.965 00:34:34.965 ' 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.965 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.226 06:45:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.226 06:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:43.366 06:46:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:43.366 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
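The discovery loop traced above classifies NICs by PCI vendor:device ID (0x8086:0x159b is an Intel E810 part bound to the ice driver) and then resolves each match to its kernel interface name through sysfs. A minimal standalone sketch of the same lookup, assuming only the sysfs layout the trace relies on; the helper name list_e810_netdevs is illustrative, not part of nvmf/common.sh:

list_e810_netdevs() {
    # scan every PCI function; keep Intel (0x8086) E810 parts (0x1592, 0x159b)
    local pci vendor device
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")
        device=$(<"$pci/device")
        [[ $vendor == 0x8086 ]] || continue
        [[ $device == 0x1592 || $device == 0x159b ]] || continue
        # the net/ subdirectory exists only while a network driver is bound
        ls "$pci/net" 2>/dev/null     # prints e.g. cvl_0_0
    done
}

On the host above this would print cvl_0_0 and cvl_0_1, matching the two "Found net devices" lines in the trace.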
00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:43.366 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:43.366 Found net devices under 0000:31:00.0: cvl_0_0 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.366 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:43.367 Found net devices under 0000:31:00.1: cvl_0_1 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:43.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:34:43.367 00:34:43.367 --- 10.0.0.2 ping statistics --- 00:34:43.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.367 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:43.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:43.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:34:43.367 00:34:43.367 --- 10.0.0.1 ping statistics --- 00:34:43.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.367 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2912376 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2912376 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2912376 ']' 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:43.367 06:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:34:43.367 [2024-11-20 06:46:02.499526] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:43.367 [2024-11-20 06:46:02.500680] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:34:43.367 [2024-11-20 06:46:02.500733] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.367 [2024-11-20 06:46:02.602328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:43.367 [2024-11-20 06:46:02.653922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.367 [2024-11-20 06:46:02.653972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.367 [2024-11-20 06:46:02.653986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:43.367 [2024-11-20 06:46:02.653993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:43.367 [2024-11-20 06:46:02.653999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:43.367 [2024-11-20 06:46:02.656128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:43.367 [2024-11-20 06:46:02.656290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.367 [2024-11-20 06:46:02.656290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:43.367 [2024-11-20 06:46:02.734057] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:43.367 [2024-11-20 06:46:02.735080] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:43.367 [2024-11-20 06:46:02.735597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
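Condensed from the nvmf_tcp_init and nvmfappstart traces above, the wiring is: move the target-side port into a private network namespace, address both ends on 10.0.0.0/24, open the NVMe/TCP port, prove reachability both ways, then launch the target inside the namespace. A sketch of the same sequence (error handling and the ipts/xtrace wrappers omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
ping -c 1 10.0.0.2                                    # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> initiator
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE

The -m 0xE core mask is cores 1-3, which is why three reactors start on cores 1, 2 and 3 in the notices above.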
00:34:43.367 [2024-11-20 06:46:02.735735] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:43.629 [2024-11-20 06:46:03.369193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:43.629 Malloc0
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:43.629 Delay0
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
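The rpc_cmd calls above build the whole abort-test target — a TCP transport, a 64 MiB Malloc0 bdev with 4096-byte blocks, a Delay0 bdev layered on top of it, and subsystem nqn.2016-06.io.spdk:cnode0 with Delay0 as namespace 1 — and the listener add plus the abort run continue in the trace below. Collected into one plain recipe (a sketch: in the harness each call goes through rpc_cmd inside the target's network namespace, and paths are relative to the SPDK tree):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# flood the subsystem with I/O and abort it: one core, queue depth 128, 1 second
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

Delay0's large configured latencies are what keep I/O pending long enough to be abortable, and the counters in the result below reconcile exactly: 123 completed plus 28667 aborted is 28790 I/Os issued, which equals the 28724 aborts submitted plus the 66 that failed to submit; 28667 successful plus 57 unsuccessful again gives the 28724 submitted.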
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:43.629 [2024-11-20 06:46:03.469156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:43.629 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:34:43.890 [2024-11-20 06:46:03.611933] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:34:45.804 Initializing NVMe Controllers
00:34:45.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:34:45.804 controller IO queue size 128 less than required
00:34:45.804 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:34:45.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:34:45.804 Initialization complete. Launching workers.
00:34:45.804 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28667
00:34:45.804 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28724, failed to submit 66
00:34:45.804 success 28667, unsuccessful 57, failed 0
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:45.804 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:45.804 rmmod nvme_tcp
00:34:46.065 rmmod nvme_fabrics
00:34:46.065 rmmod nvme_keyring
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2912376 ']'
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2912376
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2912376 ']'
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2912376
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2912376
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2912376'
00:34:46.065 killing process with pid 2912376
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2912376
00:34:46.065 06:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2912376
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:46.326 06:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:48.239 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:48.239
00:34:48.239 real 0m13.446s
00:34:48.239 user 0m11.092s
00:34:48.239 sys 0m6.870s
00:34:48.239 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable
00:34:48.240 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:34:48.240 ************************************
00:34:48.240 END TEST nvmf_abort
00:34:48.240 ************************************
00:34:48.240 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:34:48.240 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:34:48.240 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:34:48.240 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:48.503 ************************************
00:34:48.503 START TEST nvmf_ns_hotplug_stress
00:34:48.503 ************************************
00:34:48.503 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:34:48.503 * Looking for test storage...
00:34:48.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:48.503 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:48.503 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:34:48.503 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:48.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.504 --rc genhtml_branch_coverage=1 00:34:48.504 --rc genhtml_function_coverage=1 00:34:48.504 --rc genhtml_legend=1 00:34:48.504 --rc geninfo_all_blocks=1 00:34:48.504 --rc geninfo_unexecuted_blocks=1 00:34:48.504 00:34:48.504 ' 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:48.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.504 --rc genhtml_branch_coverage=1 00:34:48.504 --rc genhtml_function_coverage=1 00:34:48.504 --rc genhtml_legend=1 00:34:48.504 --rc geninfo_all_blocks=1 00:34:48.504 --rc geninfo_unexecuted_blocks=1 00:34:48.504 00:34:48.504 ' 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:48.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.504 --rc genhtml_branch_coverage=1 00:34:48.504 --rc genhtml_function_coverage=1 00:34:48.504 --rc genhtml_legend=1 00:34:48.504 --rc geninfo_all_blocks=1 00:34:48.504 --rc geninfo_unexecuted_blocks=1 00:34:48.504 00:34:48.504 ' 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:48.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:48.504 --rc genhtml_branch_coverage=1 00:34:48.504 --rc genhtml_function_coverage=1 
00:34:48.504 --rc genhtml_legend=1 00:34:48.504 --rc geninfo_all_blocks=1 00:34:48.504 --rc geninfo_unexecuted_blocks=1 00:34:48.504 00:34:48.504 ' 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
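The cmp_versions walk traced just above decides whether the installed lcov predates version 2 by splitting dotted version strings into arrays and comparing them field by field, padding the shorter one with zeros. The same idea as a standalone helper (a sketch, not scripts/common.sh verbatim; like the decimal check in the trace, it assumes purely numeric fields):

version_lt() {    # version_lt 1.15 2  ->  exit 0 because 1.15 < 2
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}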
00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.504 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:48.505 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:48.505 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:48.505 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:48.505 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:48.505 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:34:48.767 06:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:56.915 06:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:56.915 06:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:56.915 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:56.915 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.915 
06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:56.915 Found net devices under 0000:31:00.0: cvl_0_0 00:34:56.915 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:56.916 Found net devices under 0000:31:00.1: cvl_0_1 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:56.916 06:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:56.916 06:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:56.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:56.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:34:56.916 00:34:56.916 --- 10.0.0.2 ping statistics --- 00:34:56.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.916 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:56.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
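nvmf_tcp_init (@250-@287 above) turns the two E810 ports into a self-contained point-to-point setup: the target-side port moves into a private network namespace, so initiator traffic genuinely crosses the link between the two ports rather than looping back. The commands from the trace, in order (the harness also tags the iptables rule with an SPDK_NVMF comment):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # bidirectional sanity pings -- both succeed in the trace above
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1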
00:34:56.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:34:56.916 00:34:56.916 --- 10.0.0.1 ping statistics --- 00:34:56.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.916 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2917315 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2917315 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2917315 ']' 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:56.916 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:56.916 [2024-11-20 06:46:16.143953] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:56.916 [2024-11-20 06:46:16.145121] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:34:56.916 [2024-11-20 06:46:16.145171] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.916 [2024-11-20 06:46:16.245142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:56.916 [2024-11-20 06:46:16.296819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.916 [2024-11-20 06:46:16.296870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.916 [2024-11-20 06:46:16.296878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.916 [2024-11-20 06:46:16.296885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.916 [2024-11-20 06:46:16.296892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.916 [2024-11-20 06:46:16.298736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:56.916 [2024-11-20 06:46:16.298896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:56.916 [2024-11-20 06:46:16.299042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.917 [2024-11-20 06:46:16.377173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:56.917 [2024-11-20 06:46:16.378179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:56.917 [2024-11-20 06:46:16.378897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:56.917 [2024-11-20 06:46:16.378975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
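nvmfappstart then launches the target inside that namespace: @293 prefixes NVMF_APP with the netns wrapper, and the DPDK/reactor notices above confirm all three cores of the 0xE mask came up in interrupt mode. Condensed below; $rootdir is an assumed stand-in for the Jenkins workspace path, and waitforlisten is the autotest helper named in the trace:

  NVMF_APP=("$rootdir/build/bin/nvmf_tgt")                 # $rootdir: assumption
  NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # exactly as in @293 above

  "${NVMF_APP[@]}" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # blocks until the RPC socket /var/tmp/spdk.sock answers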
00:34:57.178 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:57.178 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:34:57.178 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:57.178 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:57.178 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:57.178 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:57.178 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:34:57.178 06:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:57.440 [2024-11-20 06:46:17.164272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:57.440 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:57.702 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:57.702 [2024-11-20 06:46:17.524882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:57.702 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:57.963 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:34:58.225 Malloc0 00:34:58.225 06:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:58.225 Delay0 00:34:58.225 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:58.486 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:34:58.747 NULL1 00:34:58.747 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
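With the target up, ns_hotplug_stress.sh @27-@36 assembles the test fixture over RPC: a TCP transport, one subsystem capped at ten namespaces, data and discovery listeners, a malloc bdev wrapped in a delay bdev, and a 1000-block null bdev. The same sequence as explicit rpc.py calls, in trace order:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0               # 32 MB, 512-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512                    # 1000 blocks of 512 bytes
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1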
00:34:59.007 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2917781 00:34:59.007 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:34:59.007 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:34:59.008 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:59.008 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:59.268 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:34:59.268 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:34:59.529 true 00:34:59.529 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:34:59.529 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:59.790 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:59.790 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:34:59.790 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:35:00.051 true 00:35:00.051 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:00.051 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:00.312 06:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:00.573 06:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:35:00.573 06:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:35:00.573 true 00:35:00.833 06:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
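Everything from here to the end of this excerpt is iterations of the hotplug loop (@44-@50): spdk_nvme_perf drives 512-byte random reads at queue depth 128 against the subsystem while the script hot-removes namespace 1, re-adds Delay0, and grows NULL1 by one block per pass; kill -0 merely asserts the perf process is still alive before each pass. The loop control itself is not visible in the trace, so the while-alive shape below is an assumption:

  "$rootdir/build/bin/spdk_nvme_perf" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do    # loop shape is an assumption
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))             # 1001, 1002, ... as traced below
      $rpc bdev_null_resize NULL1 "$null_size"
  done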
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:00.833 06:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:00.833 06:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:01.093 06:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:35:01.093 06:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:35:01.353 true 00:35:01.353 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:01.353 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:01.613 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:01.613 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:35:01.613 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:35:01.874 true 00:35:01.874 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:01.874 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:02.134 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:02.134 06:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:35:02.134 06:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:35:02.395 true 00:35:02.395 06:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:02.395 06:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:02.657 06:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:35:02.917 06:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:35:02.917 06:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:35:02.917 true 00:35:02.917 06:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:02.917 06:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:03.177 06:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:03.439 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:35:03.439 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:35:03.439 true 00:35:03.439 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:03.439 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:03.721 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:03.981 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:35:03.981 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:35:03.981 true 00:35:03.981 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:03.981 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:04.242 06:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:04.504 06:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:35:04.504 06:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:35:04.504 true 00:35:04.764 06:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2917781 00:35:04.764 06:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:04.764 06:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:05.024 06:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:35:05.024 06:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:35:05.285 true 00:35:05.285 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:05.285 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:05.285 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:05.545 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:35:05.545 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:35:05.806 true 00:35:05.806 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:05.806 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:06.066 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:06.066 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:35:06.066 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:35:06.327 true 00:35:06.327 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:06.327 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:06.589 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:06.589 06:46:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:35:06.589 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:35:06.850 true 00:35:06.850 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:06.851 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:07.111 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:07.371 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:35:07.371 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:35:07.371 true 00:35:07.371 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:07.371 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:07.632 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:07.892 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:35:07.892 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:35:07.892 true 00:35:07.892 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:07.892 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:08.152 06:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:08.411 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:35:08.411 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:35:08.411 true 00:35:08.671 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:08.671 06:46:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:08.671 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:08.930 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:35:08.930 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:35:09.189 true 00:35:09.189 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:09.189 06:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:09.189 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:09.452 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:35:09.452 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:35:09.720 true 00:35:09.720 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:09.720 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:09.978 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:09.978 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:35:09.978 06:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:35:10.237 true 00:35:10.237 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:10.237 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:10.498 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:10.498 06:46:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:35:10.498 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:35:10.757 true 00:35:10.757 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:10.757 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:11.017 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:11.276 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:35:11.276 06:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:35:11.276 true 00:35:11.276 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:11.276 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:11.534 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:11.793 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:35:11.793 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:35:11.793 true 00:35:11.793 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:11.793 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:12.059 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:12.349 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:35:12.349 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:35:12.349 true 00:35:12.349 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:12.349 06:46:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:12.683 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:12.942 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:35:12.942 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:35:12.942 true 00:35:12.942 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:12.942 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:13.202 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:13.462 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:35:13.462 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:35:13.462 true 00:35:13.462 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:13.462 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:13.722 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:13.983 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:35:13.983 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:35:13.983 true 00:35:14.243 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:14.243 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:14.243 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:14.502 06:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:35:14.502 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:35:14.761 true 00:35:14.761 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:14.761 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:14.761 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:15.021 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:35:15.021 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:35:15.280 true 00:35:15.280 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:15.280 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:15.539 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:15.539 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:35:15.539 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:35:15.799 true 00:35:15.799 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:15.799 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:16.060 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:16.060 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:35:16.060 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:35:16.320 true 00:35:16.320 06:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:16.320 06:46:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:16.580 06:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:16.840 06:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:35:16.840 06:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:35:16.840 true 00:35:16.840 06:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:16.840 06:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:17.099 06:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:17.358 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:35:17.358 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:35:17.358 true 00:35:17.358 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:17.358 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:17.618 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:17.877 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:35:17.877 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:35:17.877 true 00:35:18.136 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:18.136 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:18.136 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:18.396 06:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:35:18.396 06:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:35:18.656 true 00:35:18.656 06:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:18.656 06:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:18.656 06:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:18.915 06:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:35:18.915 06:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:35:19.175 true 00:35:19.175 06:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:19.175 06:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:19.434 06:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:19.434 06:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:35:19.434 06:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:35:19.693 true 00:35:19.693 06:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:19.693 06:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:19.953 06:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:20.212 06:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:35:20.212 06:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:35:20.212 true 00:35:20.212 06:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:20.212 06:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:20.471 06:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:20.731 06:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:35:20.731 06:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:35:20.731 true 00:35:20.731 06:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:20.731 06:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:20.992 06:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:21.253 06:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:35:21.253 06:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:35:21.253 true 00:35:21.514 06:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:21.514 06:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:21.514 06:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:21.774 06:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:35:21.774 06:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:35:22.034 true 00:35:22.034 06:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:22.034 06:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:22.034 06:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:22.294 06:46:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:35:22.294 06:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:35:22.554 true 00:35:22.554 06:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:22.554 06:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:22.814 06:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:22.814 06:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:35:22.814 06:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:35:23.074 true 00:35:23.074 06:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:23.074 06:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:23.334 06:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:23.334 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:35:23.334 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:35:23.594 true 00:35:23.594 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:23.594 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:23.854 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:23.854 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:35:23.854 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:35:24.115 true 00:35:24.115 06:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:24.115 06:46:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:24.376 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:24.636 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:35:24.636 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:35:24.636 true 00:35:24.636 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:24.636 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:24.896 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:25.156 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:35:25.156 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:35:25.156 true 00:35:25.156 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:25.156 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:25.416 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:25.676 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:35:25.676 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:35:25.676 true 00:35:25.676 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:25.676 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:25.935 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:26.195 06:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:35:26.195 06:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:35:26.455 true 00:35:26.455 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:26.455 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:26.455 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:26.715 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:35:26.715 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:35:26.975 true 00:35:26.975 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:26.975 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:26.975 06:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:27.235 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:35:27.235 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:35:27.494 true 00:35:27.494 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:27.494 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:27.753 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:27.753 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:35:27.753 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:35:28.013 true 00:35:28.013 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:28.013 06:46:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:28.273 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:28.532 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:35:28.532 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:35:28.532 true 00:35:28.532 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:28.532 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:28.791 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:29.052 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:35:29.052 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:35:29.052 true 00:35:29.052 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:29.052 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:29.312 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:29.312 Initializing NVMe Controllers 00:35:29.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:29.312 Controller IO queue size 128, less than required. 00:35:29.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:29.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:29.312 Initialization complete. Launching workers. 
00:35:29.312 ========================================================
00:35:29.312                                                                            Latency(us)
00:35:29.312 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:35:29.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30291.10      14.79    4225.63    1121.80   11478.86
00:35:29.312 ========================================================
00:35:29.312 Total                                                                    :   30291.10      14.79    4225.63    1121.80   11478.86
00:35:29.312
00:35:29.572 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:35:29.572 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:35:29.572 true 00:35:29.572 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2917781 00:35:29.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2917781) - No such process 00:35:29.572 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2917781 00:35:29.572 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:29.832 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:30.092 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:35:30.092 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:35:30.092 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:35:30.092 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:30.092 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:35:30.092 null0 00:35:30.092 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:30.092 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:30.092 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:35:30.351 null1 00:35:30.351 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:30.351 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:30.351 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:35:30.611 null2
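The trace up to this point is the first phase of ns_hotplug_stress.sh: while a perf process (PID 2917781) drives I/O against the subsystem, script lines @44-@50 loop, hot-removing and re-adding namespace 1 (backed by the Delay0 bdev) and growing the NULL1 bdev by one unit per pass (null_size 1035 ... 1055); the bare `true` records are rpc.py's output from each successful bdev_null_resize. A minimal sketch of that loop, reconstructed from the echoed script lines; `$rpc` is shorthand for the full scripts/rpc.py path in the log, and the starting value of null_size is assumed since this excerpt joins mid-loop:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=2917781   # the I/O generator launched earlier in the script
    null_size=1024     # assumed start; this excerpt joins at 1035
    while kill -0 $perf_pid; do                                        # sh@44
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46
        null_size=$((null_size + 1))                                   # sh@49
        $rpc bdev_null_resize NULL1 $null_size                         # sh@50
    done
    wait $perf_pid                                                     # sh@53

Once kill -0 reports "No such process" the loop exits, the script reaps the generator, and lines @54-@55 strip namespaces 1 and 2 before the concurrent phase begins.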
00:35:30.611 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:30.611 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:30.611 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:35:30.611 null3 00:35:30.611 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:30.611 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:30.611 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:35:30.870 null4 00:35:30.870 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:30.870 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:30.870 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:35:31.130 null5 00:35:31.130 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:31.130 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:31.130 06:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:35:31.390 null6 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:35:31.390 null7 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
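From nthreads=8 onward the script runs its concurrent phase: lines @58-@60 create eight 100 MB null bdevs with 4096-byte blocks (null0 ... null7), lines @62-@64 launch eight background add_remove workers, and the `wait 2923981 2923983 ...` just below collects all eight PIDs (sh@66). Each worker, per the @14-@18 records interleaved above, hot-adds and hot-removes its own namespace ID ten times. A sketch of the harness, reconstructed from the echoed script lines; the loop bound of 10 comes from the traced `(( i < 10 ))` checks, and `$rpc` again stands in for the full rpc.py path:

    add_remove() {                                   # sh@14-@18
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }

    nthreads=8
    pids=()                                          # sh@58
    for ((i = 0; i < nthreads; i++)); do             # sh@59
        $rpc bdev_null_create null$i 100 4096        # sh@60: 100 MB, 4 KiB blocks
    done
    for ((i = 0; i < nthreads; i++)); do             # sh@62
        add_remove $((i + 1)) null$i &               # sh@63: NSID i+1 on null$i
        pids+=($!)                                   # sh@64
    done
    wait "${pids[@]}"                                # sh@66

Because all eight workers race through add/remove against the same subsystem, the @16/@17/@18 records from different NSIDs interleave freely in the trace; that interleaving, not any single call, is what stresses the target's namespace hot-plug path.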
00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2923981 2923983 2923986 2923988 2923990 2923993 2923994 2923996 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.390 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:31.649 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:31.649 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:31.649 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:31.649 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:31.649 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:31.649 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:31.649 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:31.649 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:31.908 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.909 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.909 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:31.909 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.909 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.909 06:46:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:31.909 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:31.909 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:31.909 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:31.909 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:32.168 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:32.168 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:32.168 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:32.168 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:32.168 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:32.168 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:32.168 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:32.168 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.168 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.168 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.168 06:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.168 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:32.428 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.688 06:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:32.688 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:32.947 06:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.947 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.948 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:32.948 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.948 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:32.948 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.948 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:32.948 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:32.948 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:32.948 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:33.207 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:33.207 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:33.207 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:33.207 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:33.207 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:33.207 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.207 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.207 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:33.207 
06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:33.207 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:33.207 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.207 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.207 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:33.467 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.726 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.727 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:33.727 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.727 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.727 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:33.727 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.727 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.727 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:33.727 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.727 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.727 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.986 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:33.987 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:33.987 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:33.987 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:34.246 06:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:34.246 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:34.506 06:46:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:34.506 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:34.767 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:34.767 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.767 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:34.768 
06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:34.768 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.030 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.292 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.292 06:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.292 06:46:55 
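The burst of rpc.py calls above is the core of the hotplug stress test: lines 16-18 of ns_hotplug_stress.sh repeatedly attach and detach namespaces 1-8 (backed by bdevs null0-null7) on subsystem nqn.2016-06.io.spdk:cnode1 while the target runs in interrupt mode. The script body is not reproduced in the log; judging only from the @16-@18 xtrace markers, each pass looks roughly like the hedged sketch below. The randomized namespace choice is an assumption, and the interleaving of add and remove lines in the trace suggests the real script may run them concurrently.

    # Hypothetical reconstruction from the @16-@18 xtrace markers; namespace
    # selection and concurrency details of the real script may differ.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; ++i)); do
        n=$((RANDOM % 8 + 1))                                         # IDs 1-8 map to null0-null7
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"  # line 17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))"  # line 18
    done
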
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:35.292 rmmod nvme_tcp 00:35:35.292 rmmod nvme_fabrics 00:35:35.292 rmmod nvme_keyring 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2917315 ']' 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2917315 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2917315 ']' 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2917315 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:35.292 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2917315 00:35:35.552 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:35.552 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:35.552 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2917315' 00:35:35.552 killing process with pid 2917315 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2917315 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2917315 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:35.553 06:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:38.095 00:35:38.095 real 0m49.302s 00:35:38.095 user 3m2.818s 00:35:38.095 sys 0m23.348s 00:35:38.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:38.096 ************************************ 00:35:38.096 END TEST nvmf_ns_hotplug_stress 00:35:38.096 ************************************ 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
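Once the loop drains, the trap is cleared and nvmftestfini tears the target down: sync, modprobe -v -r of nvme-tcp and nvme-fabrics (the bare rmmod lines are modprobe's verbose output), killprocess on the nvmf_tgt pid 2917315, an iptables-save | grep -v SPDK_NVMF | iptables-restore pass to drop the test's firewall rules, and removal of the spdk network namespace. The @952-@976 trace implies killprocess logic along these lines; this is a hedged paraphrase, not the verbatim autotest_common.sh body:

    # Hedged paraphrase of killprocess as implied by the @952-@976 trace; the
    # sudo branch is abbreviated and the exact helper may differ.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                    # @952: refuse an empty pid
        kill -0 "$pid" || return 0                   # @956: already gone
        local name=$pid
        if [[ $(uname) == Linux ]]; then             # @957
            name=$(ps --no-headers -o comm= "$pid")  # reactor_1 in this run
        fi
        if [[ $name == sudo ]]; then                 # @962: never true here
            : # would target the child of sudo instead (details not in the log)
        fi
        echo "killing process with pid $pid"         # @970
        kill "$pid"                                  # @971
        wait "$pid" || true                          # @976
    }

With that done the suite prints the timing summary (real 0m49.302s), closes the END TEST banner, and run_test launches the next case, nvmf_delete_subsystem, with the same --transport=tcp --interrupt-mode arguments.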
common/autotest_common.sh@10 -- # set +x 00:35:38.096 ************************************ 00:35:38.096 START TEST nvmf_delete_subsystem 00:35:38.096 ************************************ 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:35:38.096 * Looking for test storage... 00:35:38.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.096 --rc genhtml_branch_coverage=1 00:35:38.096 --rc genhtml_function_coverage=1 00:35:38.096 --rc genhtml_legend=1 00:35:38.096 --rc geninfo_all_blocks=1 00:35:38.096 --rc geninfo_unexecuted_blocks=1 00:35:38.096 00:35:38.096 ' 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.096 --rc genhtml_branch_coverage=1 00:35:38.096 --rc genhtml_function_coverage=1 00:35:38.096 --rc genhtml_legend=1 00:35:38.096 --rc geninfo_all_blocks=1 00:35:38.096 --rc geninfo_unexecuted_blocks=1 00:35:38.096 00:35:38.096 ' 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.096 --rc genhtml_branch_coverage=1 00:35:38.096 --rc genhtml_function_coverage=1 00:35:38.096 --rc genhtml_legend=1 00:35:38.096 --rc geninfo_all_blocks=1 00:35:38.096 --rc geninfo_unexecuted_blocks=1 00:35:38.096 00:35:38.096 ' 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.096 --rc genhtml_branch_coverage=1 00:35:38.096 --rc genhtml_function_coverage=1 00:35:38.096 --rc 
genhtml_legend=1 00:35:38.096 --rc geninfo_all_blocks=1 00:35:38.096 --rc geninfo_unexecuted_blocks=1 00:35:38.096 00:35:38.096 ' 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.096 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.097 06:46:57 
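The new test opens by probing the installed lcov: autotest_common.sh pulls the version field with awk and hands it to lt, which delegates to cmp_versions in scripts/common.sh. The trace above walks the comparison of 1.15 < 2 component by component (ver1_l=2, ver2_l=1, decimal 1 vs decimal 2, return 0). Reassembled, the helper behaves roughly like this sketch; padding missing components with 0 is an assumption:

    # Hedged re-creation of the lt/cmp_versions trace above (scripts/common.sh).
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]                            # equal: only <=, >=, == succeed
    }
    lt() { cmp_versions "$1" '<' "$2"; }              # used above as: lt 1.15 2

Since 1 < 2 decides the first component, lt 1.15 2 succeeds and the script selects the branch-coverage lcov options echoed in the LCOV_OPTS export.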
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:35:38.097 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:46.228 06:47:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:46.228 06:47:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:46.228 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:46.228 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.228 06:47:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:46.228 Found net devices under 0000:31:00.0: cvl_0_0 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.228 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:46.229 Found net devices under 0000:31:00.1: cvl_0_1 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:46.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:46.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:35:46.229 00:35:46.229 --- 10.0.0.2 ping statistics --- 00:35:46.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.229 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:46.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:46.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:35:46.229 00:35:46.229 --- 10.0.0.1 ping statistics --- 00:35:46.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.229 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2929152 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2929152 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2929152 ']' 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
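What the nvmf_tcp_init trace above amounts to: the harness wires the two e810 ports back to back by moving cvl_0_0 into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), opens TCP port 4420 in iptables, and pings in both directions before anything NVMe-related starts. A minimal standalone sketch of that topology, with the interface names and addresses taken from this run (adjust for other hardware):

    #!/usr/bin/env bash
    # Split two NIC ports into target/initiator halves, as nvmf_tcp_init does.
    set -e
    TARGET_IF=cvl_0_0        # port handed to the target namespace
    INITIATOR_IF=cvl_0_1     # port left in the root namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Let NVMe/TCP traffic through to the initiator-side port.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Verify the path in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1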
00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:46.229 06:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:46.229 [2024-11-20 06:47:05.507867] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:46.229 [2024-11-20 06:47:05.509047] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:46.229 [2024-11-20 06:47:05.509101] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:46.229 [2024-11-20 06:47:05.608415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:46.229 [2024-11-20 06:47:05.660087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.229 [2024-11-20 06:47:05.660138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:46.229 [2024-11-20 06:47:05.660147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.229 [2024-11-20 06:47:05.660154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.229 [2024-11-20 06:47:05.660160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:46.229 [2024-11-20 06:47:05.661811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.229 [2024-11-20 06:47:05.661853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.230 [2024-11-20 06:47:05.740419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:46.230 [2024-11-20 06:47:05.741129] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:46.230 [2024-11-20 06:47:05.741364] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
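nvmfappstart then launches the target inside that namespace with --interrupt-mode and a two-core mask, and the NOTICE lines above confirm that both reactors and the nvmf_tgt poll-group threads come up in interrupt rather than poll mode, which is the whole point of this interrupt_mode test group. waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A rough equivalent of that start-and-wait step (probing readiness with rpc_get_methods is one way to do it, not the only one):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start the target in interrupt mode on cores 0-1 inside the namespace.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!

    # Block until the RPC socket exists and responds, as waitforlisten does.
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] &&
            "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done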
00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:46.491 [2024-11-20 06:47:06.374887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.491 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:46.751 [2024-11-20 06:47:06.407568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:46.751 NULL1 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.751 06:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:46.751 Delay0 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2929491 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:35:46.751 06:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:46.751 [2024-11-20 06:47:06.532137] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
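Collapsing the rpc_cmd calls traced above into one place: the test stacks a 1000 MiB null bdev under a delay bdev that adds roughly one second of latency to every read and write (-r/-t/-w/-n set average and p99 read and write latency in microseconds), exposes Delay0 as the only namespace of cnode1, and starts a 5-second spdk_nvme_perf run at queue depth 128. The artificially slow namespace guarantees that plenty of I/O is still in flight when the subsystem is deleted two seconds in, which is exactly what the next step does. The same sequence as a plain script, using the rpc.py from this workspace:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512              # 1000 MiB, 512 B blocks
    "$rpc" bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s on every I/O path
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # 70/30 random read/write load from the initiator side for 5 seconds.
    "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!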
00:35:48.659 06:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:48.659 06:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.659 06:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 [2024-11-20 06:47:08.582296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb842c0 is same with the state(6) to be set 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error 
(sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error 
(sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 Write completed with error (sct=0, sc=8) 00:35:48.920 starting I/O failed: -6 00:35:48.920 [2024-11-20 06:47:08.585812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6be8000c40 is same with the state(6) to be set 00:35:48.920 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with 
error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Write completed with error (sct=0, sc=8) 00:35:48.921 Read completed with error (sct=0, sc=8) 00:35:49.859 [2024-11-20 06:47:09.548831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb855e0 is same with the state(6) to be set 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 [2024-11-20 06:47:09.586509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb840e0 is same with the state(6) to be set 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 
00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 [2024-11-20 06:47:09.586642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb844a0 is same with the state(6) to be set 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 [2024-11-20 06:47:09.587721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6be800d7e0 is same with the state(6) to be set 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Write completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.859 Read completed with error (sct=0, sc=8) 00:35:49.860 Read completed with error (sct=0, sc=8) 00:35:49.860 Read completed with error (sct=0, sc=8) 00:35:49.860 Read completed with error (sct=0, sc=8) 00:35:49.860 Read completed with error (sct=0, sc=8) 00:35:49.860 Write completed with error (sct=0, sc=8) 00:35:49.860 Read completed with error (sct=0, sc=8) 00:35:49.860 Write completed with error (sct=0, sc=8) 00:35:49.860 Read completed with error (sct=0, sc=8) 00:35:49.860 [2024-11-20 06:47:09.588001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6be800d020 is same with the state(6) to be set 00:35:49.860 Initializing NVMe Controllers 00:35:49.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:49.860 Controller IO queue size 128, less than required. 00:35:49.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:49.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:49.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:49.860 Initialization complete. Launching workers. 
00:35:49.860 ========================================================
00:35:49.860 Latency(us)
00:35:49.860 Device Information : IOPS MiB/s Average min max
00:35:49.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.65 0.08 893118.97 367.36 1007873.08
00:35:49.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.73 0.08 931208.53 309.87 1011278.54
00:35:49.860 ========================================================
00:35:49.860 Total : 325.37 0.16 911231.89 309.87 1011278.54
00:35:49.860
00:35:49.860 [2024-11-20 06:47:09.588535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb855e0 (9): Bad file descriptor
00:35:49.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:35:49.860 06:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:49.860 06:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:35:49.860 06:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2929491
00:35:49.860 06:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:35:50.428 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:35:50.428 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2929491
00:35:50.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2929491) - No such process
00:35:50.428 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2929491
00:35:50.428 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:35:50.428 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2929491
00:35:50.428 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:35:50.428 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:35:50.428 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:35:50.428 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:35:50.428 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2929491
00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:50.429 [2024-11-20 06:47:10.123300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2930170 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930170 00:35:50.429 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:50.429 [2024-11-20 06:47:10.222850] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
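The two perf runs differ on purpose. In the first one, nvmf_delete_subsystem was issued while I/O was still queued behind the 1-second delay bdev: the target tore down the queue pairs, every outstanding command came back as 'completed with error (sct=0, sc=8)' (a generic-status abort code; in the NVMe base spec 0x08 is Command Aborted due to SQ Deletion), perf exited non-zero, and the NOT wait wrapper asserted that failure. In this second run the subsystem is recreated and perf is given only 3 seconds, so it is expected to finish cleanly while the script polls it twice a second; and because every I/O still crosses the delay bdev, the averages in the summary below sit just above 1,000,000 us, i.e. the configured delay rather than a transport problem. A condensed version of the polling loop the trace below executes:

    # Wait for a background pid, bailing out after ~10 s (delete_subsystem.sh style).
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo 'perf hung' >&2; exit 1; }
        sleep 0.5
    done
    wait "$perf_pid"   # perf exited on its own; collect its status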
00:35:50.998 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:50.998 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930170 00:35:50.998 06:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:51.256 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:51.257 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930170 00:35:51.257 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:51.825 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:51.825 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930170 00:35:51.825 06:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:52.394 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:52.394 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930170 00:35:52.394 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:52.963 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:52.963 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930170 00:35:52.963 06:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:53.531 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:35:53.531 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930170 00:35:53.531 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:35:53.531 Initializing NVMe Controllers 00:35:53.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:53.531 Controller IO queue size 128, less than required. 00:35:53.531 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:53.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:53.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:53.532 Initialization complete. Launching workers. 
00:35:53.532 ========================================================
00:35:53.532 Latency(us)
00:35:53.532 Device Information : IOPS MiB/s Average min max
00:35:53.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003138.76 1000191.03 1041815.18
00:35:53.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004080.63 1000187.78 1010481.10
00:35:53.532 ========================================================
00:35:53.532 Total : 256.00 0.12 1003609.69 1000187.78 1041815.18
00:35:53.532
00:35:53.791 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:35:53.791 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930170
00:35:53.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2930170) - No such process
00:35:53.791 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2930170
00:35:53.791 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:35:53.791 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:35:53.791 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:53.791 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:35:53.791 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:53.791 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:35:53.791 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:53.791 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:53.791 rmmod nvme_tcp
00:35:53.791 rmmod nvme_fabrics
00:35:54.050 rmmod nvme_keyring
00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2929152 ']'
00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2929152
00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2929152 ']'
00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2929152
00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2929152 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2929152' 00:35:54.050 killing process with pid 2929152 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2929152 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2929152 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.050 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:54.051 06:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.591 06:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:56.591 00:35:56.591 real 0m18.437s 00:35:56.591 user 0m26.481s 00:35:56.591 sys 0m7.427s 00:35:56.591 06:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:56.591 06:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:35:56.591 ************************************ 00:35:56.591 END TEST nvmf_delete_subsystem 00:35:56.591 ************************************ 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:56.591 ************************************ 00:35:56.591 START TEST nvmf_host_management 00:35:56.591 ************************************ 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:35:56.591 * Looking for test storage... 00:35:56.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:35:56.591 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.592 --rc genhtml_branch_coverage=1 00:35:56.592 --rc genhtml_function_coverage=1 00:35:56.592 --rc genhtml_legend=1 00:35:56.592 --rc geninfo_all_blocks=1 00:35:56.592 --rc geninfo_unexecuted_blocks=1 00:35:56.592 00:35:56.592 ' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.592 --rc genhtml_branch_coverage=1 00:35:56.592 --rc genhtml_function_coverage=1 00:35:56.592 --rc genhtml_legend=1 00:35:56.592 --rc geninfo_all_blocks=1 00:35:56.592 --rc geninfo_unexecuted_blocks=1 00:35:56.592 00:35:56.592 ' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.592 --rc genhtml_branch_coverage=1 00:35:56.592 --rc genhtml_function_coverage=1 00:35:56.592 --rc genhtml_legend=1 00:35:56.592 --rc geninfo_all_blocks=1 00:35:56.592 --rc geninfo_unexecuted_blocks=1 00:35:56.592 00:35:56.592 ' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:56.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.592 --rc genhtml_branch_coverage=1 00:35:56.592 --rc genhtml_function_coverage=1 00:35:56.592 --rc genhtml_legend=1 
00:35:56.592 --rc geninfo_all_blocks=1 00:35:56.592 --rc geninfo_unexecuted_blocks=1 00:35:56.592 00:35:56.592 ' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.592 06:47:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:56.592 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:35:56.593 06:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:04.838 06:47:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:04.838 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:04.838 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
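The scan traced above matches each PCI function against the known Intel E810 IDs (vendor 0x8086, device 0x159b) and then resolves it to a kernel interface by globbing /sys/bus/pci/devices/$pci/net/*. A minimal standalone sketch of that lookup; the IDs and the cvl_* names it reports are specific to this rig:

#!/usr/bin/env bash
# Resolve every Intel E810 (0x8086:0x159b) PCI function to its net device,
# mirroring the sysfs glob nvmf/common.sh traces above.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done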
00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:04.838 Found net devices under 0000:31:00.0: cvl_0_0 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:04.838 Found net devices under 0000:31:00.1: cvl_0_1 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:04.838 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:04.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:04.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:36:04.839 00:36:04.839 --- 10.0.0.2 ping statistics --- 00:36:04.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.839 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:04.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:04.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:36:04.839 00:36:04.839 --- 10.0.0.1 ping statistics --- 00:36:04.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.839 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2934893 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2934893 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2934893 ']' 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:04.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:04.839 06:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:04.839 [2024-11-20 06:47:23.917773] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:04.839 [2024-11-20 06:47:23.918965] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:04.839 [2024-11-20 06:47:23.919014] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:04.839 [2024-11-20 06:47:24.019061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:04.839 [2024-11-20 06:47:24.072811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:04.839 [2024-11-20 06:47:24.072862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:04.839 [2024-11-20 06:47:24.072870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:04.839 [2024-11-20 06:47:24.072878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:04.839 [2024-11-20 06:47:24.072884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:04.839 [2024-11-20 06:47:24.074965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:04.839 [2024-11-20 06:47:24.075205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:04.839 [2024-11-20 06:47:24.075496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:04.839 [2024-11-20 06:47:24.075499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:04.839 [2024-11-20 06:47:24.155904] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:04.839 [2024-11-20 06:47:24.157097] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:04.839 [2024-11-20 06:47:24.157272] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:04.839 [2024-11-20 06:47:24.157694] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:04.839 [2024-11-20 06:47:24.157765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
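Everything nvmftestinit traced above reduces to a short sequence: move the target-side port into its own network namespace, address both ends, open TCP/4420 on the initiator interface, and prove reachability with one ping in each direction. A condensed sketch using the interface names and addresses from this run:

# Namespace plumbing performed by nvmftestinit (values from this run:
# target port cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk, initiator cvl_0_1 at 10.0.0.1).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is also why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above: nvmf_tgt has to run inside cvl_0_0_ns_spdk to own the target-side port.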
00:36:04.839 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:04.839 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:36:04.839 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:04.839 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:04.839 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:05.100 [2024-11-20 06:47:24.784524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:05.100 Malloc0 00:36:05.100 [2024-11-20 06:47:24.876866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2935265 00:36:05.100 06:47:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2935265 /var/tmp/bdevperf.sock 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2935265 ']' 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:05.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:05.100 { 00:36:05.100 "params": { 00:36:05.100 "name": "Nvme$subsystem", 00:36:05.100 "trtype": "$TEST_TRANSPORT", 00:36:05.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:05.100 "adrfam": "ipv4", 00:36:05.100 "trsvcid": "$NVMF_PORT", 00:36:05.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:05.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:05.100 "hdgst": ${hdgst:-false}, 00:36:05.100 "ddgst": ${ddgst:-false} 00:36:05.100 }, 00:36:05.100 "method": "bdev_nvme_attach_controller" 00:36:05.100 } 00:36:05.100 EOF 00:36:05.100 )") 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
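The gen_nvmf_target_json call traced above assembles bdevperf's --json config on the fly: one heredoc fragment per subsystem index, comma-joined and pretty-printed through jq, then handed to bdevperf as /dev/fd/63 via process substitution. The rendered fragment for subsystem 0 appears just below; here is a minimal sketch of the same templating idea (the function name and the bare-array wrapper are illustrative, not the helper's exact output):

# Hypothetical stand-in for gen_nvmf_target_json: render one
# bdev_nvme_attach_controller fragment per subsystem index, then
# comma-join and pretty-print the lot.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "tcp",
  "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
  "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,
    jq . <<<"[${config[*]}]"
}
# e.g. bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json 0) -q 64 -o 65536 -w verify -t 10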
00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:36:05.100 06:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:05.100 "params": { 00:36:05.100 "name": "Nvme0", 00:36:05.100 "trtype": "tcp", 00:36:05.100 "traddr": "10.0.0.2", 00:36:05.100 "adrfam": "ipv4", 00:36:05.100 "trsvcid": "4420", 00:36:05.100 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:05.100 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:05.100 "hdgst": false, 00:36:05.100 "ddgst": false 00:36:05.100 }, 00:36:05.100 "method": "bdev_nvme_attach_controller" 00:36:05.100 }' 00:36:05.100 [2024-11-20 06:47:24.986667] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:05.100 [2024-11-20 06:47:24.986742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2935265 ] 00:36:05.362 [2024-11-20 06:47:25.083779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.362 [2024-11-20 06:47:25.137395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.622 Running I/O for 10 seconds... 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=675 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 675 -ge 100 ']' 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.197 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:06.197 [2024-11-20 06:47:25.892626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:06.197 [2024-11-20 06:47:25.892685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.197 [2024-11-20 06:47:25.892697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:06.197 [2024-11-20 06:47:25.892705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.197 [2024-11-20 06:47:25.892713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:06.198 [2024-11-20 06:47:25.892731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.892740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:36:06.198 [2024-11-20 06:47:25.892755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.892763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69280 is same with the state(6) to be set 00:36:06.198 [2024-11-20 06:47:25.894027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 
[2024-11-20 06:47:25.894206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 [2024-11-20 06:47:25.894364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.198 [2024-11-20 06:47:25.894371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198 
[2024-11-20 06:47:25.894380 .. 06:47:25.895164] nvme_qpair.c: 243/474: *NOTICE*: repeated records condensed: 45 WRITE commands (sqid:1 cid:19..63 nsid:1 lba:100736..106368 step 128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.198
[2024-11-20 06:47:25.896444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:06.199 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.199 06:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:36:06.199 task offset: 98304 on job bdev=Nvme0n1 fails 00:36:06.199 00:36:06.199 Latency(us) 00:36:06.199 [2024-11-20T05:47:26.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:06.199 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:06.199 Job: Nvme0n1 ended in about 0.55 seconds with error 00:36:06.199 Verification LBA range: start 0x0 length 0x400 00:36:06.199 Nvme0n1 : 0.55 1397.34 87.33 116.44 0.00 41232.43 1679.36 37355.52 00:36:06.199 [2024-11-20T05:47:26.119Z] =================================================================================================================== 00:36:06.199 [2024-11-20T05:47:26.119Z] Total : 1397.34 87.33 116.44 0.00 41232.43 1679.36 37355.52 00:36:06.199 [2024-11-20 06:47:25.898658] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:06.199 [2024-11-20 06:47:25.898693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d69280 (9): Bad file descriptor 00:36:06.199 [2024-11-20 06:47:25.946416] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
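The burst condensed above is the expected signature of a controller reset with I/O in flight: when the target deletes the submission queue, every queued WRITE completes with ABORTED - SQ DELETION (status 00/08, i.e. generic command status, command aborted due to SQ deletion). A minimal sketch for condensing such a burst when reading a saved copy of this log (the file name is a placeholder; the patterns match the record format above):

    # Count the aborted completions, then recover the LBA span of the
    # aborted WRITE commands (log file name is hypothetical).
    grep -c 'ABORTED - SQ DELETION' autorun.log
    grep -o 'lba:[0-9]*' autorun.log | cut -d: -f2 | sort -n | sed -n '1p;$p'
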
00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2935265 00:36:07.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2935265) - No such process 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:07.182 { 00:36:07.182 "params": { 00:36:07.182 "name": "Nvme$subsystem", 00:36:07.182 "trtype": "$TEST_TRANSPORT", 00:36:07.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:07.182 "adrfam": "ipv4", 00:36:07.182 "trsvcid": "$NVMF_PORT", 00:36:07.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:07.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:07.182 "hdgst": ${hdgst:-false}, 00:36:07.182 "ddgst": ${ddgst:-false} 00:36:07.182 }, 00:36:07.182 "method": "bdev_nvme_attach_controller" 00:36:07.182 } 00:36:07.182 EOF 00:36:07.182 )") 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:36:07.182 06:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:07.182 "params": { 00:36:07.182 "name": "Nvme0", 00:36:07.182 "trtype": "tcp", 00:36:07.182 "traddr": "10.0.0.2", 00:36:07.182 "adrfam": "ipv4", 00:36:07.182 "trsvcid": "4420", 00:36:07.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:07.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:07.182 "hdgst": false, 00:36:07.182 "ddgst": false 00:36:07.182 }, 00:36:07.182 "method": "bdev_nvme_attach_controller" 00:36:07.182 }' 00:36:07.182 [2024-11-20 06:47:26.962133] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:36:07.182 [2024-11-20 06:47:26.962212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2935612 ] 00:36:07.182 [2024-11-20 06:47:27.060120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:07.442 [2024-11-20 06:47:27.103884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:07.702 Running I/O for 1 seconds... 00:36:08.643 1607.00 IOPS, 100.44 MiB/s 00:36:08.643 Latency(us) 00:36:08.643 [2024-11-20T05:47:28.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.643 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:08.643 Verification LBA range: start 0x0 length 0x400 00:36:08.643 Nvme0n1 : 1.01 1658.68 103.67 0.00 0.00 37888.83 1419.95 33860.27 00:36:08.643 [2024-11-20T05:47:28.563Z] =================================================================================================================== 00:36:08.643 [2024-11-20T05:47:28.563Z] Total : 1658.68 103.67 0.00 0.00 37888.83 1419.95 33860.27 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:08.904 rmmod nvme_tcp 00:36:08.904 rmmod nvme_fabrics 00:36:08.904 rmmod nvme_keyring 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2934893 ']' 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2934893 00:36:08.904 06:47:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2934893 ']' 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2934893 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2934893 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2934893' 00:36:08.904 killing process with pid 2934893 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2934893 00:36:08.904 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2934893 00:36:09.165 [2024-11-20 06:47:28.826832] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:09.165 06:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.078 06:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:11.078 06:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:36:11.078 00:36:11.078 real 0m14.854s 00:36:11.078 user 
0m19.982s 00:36:11.078 sys 0m7.632s 00:36:11.078 06:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:11.078 06:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:11.078 ************************************ 00:36:11.078 END TEST nvmf_host_management 00:36:11.078 ************************************ 00:36:11.078 06:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:11.078 06:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:11.078 06:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:11.078 06:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:11.340 ************************************ 00:36:11.340 START TEST nvmf_lvol 00:36:11.340 ************************************ 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:11.340 * Looking for test storage... 00:36:11.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.340 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:11.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.341 --rc genhtml_branch_coverage=1 00:36:11.341 --rc genhtml_function_coverage=1 00:36:11.341 --rc genhtml_legend=1 00:36:11.341 --rc geninfo_all_blocks=1 00:36:11.341 --rc geninfo_unexecuted_blocks=1 00:36:11.341 00:36:11.341 ' 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:11.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.341 --rc genhtml_branch_coverage=1 00:36:11.341 --rc genhtml_function_coverage=1 00:36:11.341 --rc genhtml_legend=1 00:36:11.341 --rc geninfo_all_blocks=1 00:36:11.341 --rc geninfo_unexecuted_blocks=1 00:36:11.341 00:36:11.341 ' 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:11.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.341 --rc genhtml_branch_coverage=1 00:36:11.341 --rc genhtml_function_coverage=1 00:36:11.341 --rc genhtml_legend=1 00:36:11.341 --rc geninfo_all_blocks=1 00:36:11.341 --rc geninfo_unexecuted_blocks=1 00:36:11.341 00:36:11.341 ' 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:11.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.341 --rc genhtml_branch_coverage=1 00:36:11.341 --rc genhtml_function_coverage=1 
00:36:11.341 --rc genhtml_legend=1 00:36:11.341 --rc geninfo_all_blocks=1 00:36:11.341 --rc geninfo_unexecuted_blocks=1 00:36:11.341 00:36:11.341 ' 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.341 06:47:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.341 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:11.603 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:11.603 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:11.603 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.603 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.603 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.603 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:11.603 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:11.603 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:36:11.603 06:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:19.748 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:19.748 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:36:19.748 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:19.748 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:19.748 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:19.748 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:19.749 06:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:19.749 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:19.749 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:19.749 Found net devices under 0000:31:00.0: cvl_0_0 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:19.749 Found net devices under 0000:31:00.1: cvl_0_1 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:19.749 
06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:19.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:19.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:36:19.749 00:36:19.749 --- 10.0.0.2 ping statistics --- 00:36:19.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.749 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:36:19.749 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:19.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:19.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:36:19.749 00:36:19.749 --- 10.0.0.1 ping statistics --- 00:36:19.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.750 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2940047 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2940047 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2940047 ']' 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:19.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:19.750 06:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:19.750 [2024-11-20 06:47:38.832134] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
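Before the target app started, nvmf_tcp_init (traced above) split the two E810 ports between the root network namespace and a private one, then verified reachability in both directions. A condensed sketch of that sequence; the device names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses come from this run's trace, and the iptables comment tag that the ipts wrapper adds is omitted:

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns
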
00:36:19.750 [2024-11-20 06:47:38.833303] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:19.750 [2024-11-20 06:47:38.833357] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.750 [2024-11-20 06:47:38.935933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:19.750 [2024-11-20 06:47:38.988091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:19.750 [2024-11-20 06:47:38.988144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:19.750 [2024-11-20 06:47:38.988154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:19.750 [2024-11-20 06:47:38.988161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:19.750 [2024-11-20 06:47:38.988167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:19.750 [2024-11-20 06:47:38.990302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.750 [2024-11-20 06:47:38.990450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.750 [2024-11-20 06:47:38.990450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:19.750 [2024-11-20 06:47:39.069169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:19.750 [2024-11-20 06:47:39.070120] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:19.750 [2024-11-20 06:47:39.070449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:19.750 [2024-11-20 06:47:39.070643] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
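With the target running in interrupt mode, the records that follow build the lvol test stack over JSON-RPC: a TCP transport, two 64 MiB malloc bdevs striped into a raid0, an lvstore on the raid, and a 20 MiB lvol exported through subsystem cnode0. A condensed sketch of that sequence (the rpc.py path is abbreviated, and the two shell variables stand in for the UUIDs the calls return):

    rpc=scripts/rpc.py                                 # abbreviated path
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                     # -> Malloc0
    $rpc bdev_malloc_create 64 512                     # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)     # returns the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)    # 20 MiB volume, returns its UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
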
00:36:19.750 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:19.750 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:36:19.750 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:19.750 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:19.750 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:20.012 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:20.012 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:20.012 [2024-11-20 06:47:39.867472] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.012 06:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:20.272 06:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:36:20.272 06:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:20.533 06:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:36:20.533 06:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:36:20.794 06:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:36:21.055 06:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1ae038dc-56cc-4b20-8a7f-b2276d0718ce 00:36:21.055 06:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1ae038dc-56cc-4b20-8a7f-b2276d0718ce lvol 20 00:36:21.055 06:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=88202a5d-b0d0-4245-b826-e022da695a01 00:36:21.055 06:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:21.316 06:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 88202a5d-b0d0-4245-b826-e022da695a01 00:36:21.578 06:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:21.578 [2024-11-20 06:47:41.451473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 ***
00:36:21.578 06:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:21.839 06:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2940691
00:36:21.839 06:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:36:21.839 06:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:36:22.781 06:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 88202a5d-b0d0-4245-b826-e022da695a01 MY_SNAPSHOT
00:36:23.043 06:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e900a011-63fa-478d-8198-d5d23bbe0724
00:36:23.043 06:47:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 88202a5d-b0d0-4245-b826-e022da695a01 30
00:36:23.303 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e900a011-63fa-478d-8198-d5d23bbe0724 MY_CLONE
00:36:23.564 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=850a4684-b4da-4fe2-b5cb-a749ab25959a
00:36:23.564 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 850a4684-b4da-4fe2-b5cb-a749ab25959a
00:36:24.134 06:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2940691
00:36:32.263 Initializing NVMe Controllers
00:36:32.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:36:32.263 Controller IO queue size 128, less than required.
00:36:32.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:32.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:36:32.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:36:32.263 Initialization complete. Launching workers.
00:36:32.263 ========================================================
00:36:32.263 Latency(us)
00:36:32.263 Device Information : IOPS MiB/s Average min max
00:36:32.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15009.50 58.63 8528.45 1912.69 72025.64
00:36:32.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15063.20 58.84 8497.33 4039.77 84262.51
00:36:32.263 ========================================================
00:36:32.263 Total : 30072.70 117.47 8512.86 1912.69 84262.51
00:36:32.263
00:36:32.263 06:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:32.263 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 88202a5d-b0d0-4245-b826-e022da695a01
00:36:32.522 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ae038dc-56cc-4b20-8a7f-b2276d0718ce
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:32.782 rmmod nvme_tcp
00:36:32.782 rmmod nvme_fabrics
00:36:32.782 rmmod nvme_keyring
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2940047 ']'
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2940047
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2940047 ']'
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2940047
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2940047 00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2940047' 00:36:32.782 killing process with pid 2940047 00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2940047 00:36:32.782 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2940047 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:33.043 06:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.952 06:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:35.212 00:36:35.212 real 0m23.855s 00:36:35.212 user 0m55.863s 00:36:35.212 sys 0m10.760s 00:36:35.212 06:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:35.212 06:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:35.212 ************************************ 00:36:35.212 END TEST nvmf_lvol 00:36:35.212 ************************************ 00:36:35.212 06:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:36:35.212 06:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:36:35.212 06:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:35.212 06:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:35.212 ************************************ 00:36:35.212 START TEST nvmf_lvs_grow 00:36:35.212 
************************************ 00:36:35.212 06:47:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:36:35.212 * Looking for test storage... 00:36:35.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:35.212 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:35.212 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:36:35.212 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:35.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.473 --rc genhtml_branch_coverage=1 00:36:35.473 --rc genhtml_function_coverage=1 00:36:35.473 --rc genhtml_legend=1 00:36:35.473 --rc geninfo_all_blocks=1 00:36:35.473 --rc geninfo_unexecuted_blocks=1 00:36:35.473 00:36:35.473 ' 00:36:35.473 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:35.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.474 --rc genhtml_branch_coverage=1 00:36:35.474 --rc genhtml_function_coverage=1 00:36:35.474 --rc genhtml_legend=1 00:36:35.474 --rc geninfo_all_blocks=1 00:36:35.474 --rc geninfo_unexecuted_blocks=1 00:36:35.474 00:36:35.474 ' 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:35.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.474 --rc genhtml_branch_coverage=1 00:36:35.474 --rc genhtml_function_coverage=1 00:36:35.474 --rc genhtml_legend=1 00:36:35.474 --rc geninfo_all_blocks=1 00:36:35.474 --rc geninfo_unexecuted_blocks=1 00:36:35.474 00:36:35.474 ' 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:35.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:35.474 --rc genhtml_branch_coverage=1 00:36:35.474 --rc genhtml_function_coverage=1 00:36:35.474 --rc genhtml_legend=1 00:36:35.474 --rc geninfo_all_blocks=1 00:36:35.474 --rc geninfo_unexecuted_blocks=1 00:36:35.474 00:36:35.474 ' 00:36:35.474 06:47:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
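(Editor's note, for orientation amid the xtrace: build_nvmf_app_args is assembling the target's command line one append at a time. A minimal bash sketch of the result, assuming only the values visible elsewhere in this log, SHM id 0, trace mask 0xFFFF, interrupt mode on, and with paths shortened; this is not the full common.sh logic:)
  # Sketch only; values taken from this run's trace.
  NVMF_APP=(./build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id + tracepoint group mask
  NVMF_APP+=(--interrupt-mode)                  # appended because the suite was run with --interrupt-mode
  # nvmfappstart later runs: ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x1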
00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:35.474 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.475 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:35.475 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:35.475 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:36:35.475 06:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:43.610 06:48:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
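(Editor's note: the loop traced below walks each detected PCI function and asks sysfs which network interfaces hang off it. A rough bash equivalent of what nvmf/common.sh is doing here, a sketch assuming the sysfs layout on this rig:)
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # interfaces bound to this function
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifnames
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done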
00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:43.610 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:43.610 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:43.610 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:43.611 Found net devices under 0000:31:00.0: cvl_0_0 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:43.611 Found net devices under 0000:31:00.1: cvl_0_1 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:43.611 06:48:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:43.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:43.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:36:43.611 00:36:43.611 --- 10.0.0.2 ping statistics --- 00:36:43.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.611 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:43.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:43.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:36:43.611 00:36:43.611 --- 10.0.0.1 ping statistics --- 00:36:43.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.611 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2947093 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2947093 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2947093 ']' 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:43.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:43.611 06:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:43.611 [2024-11-20 06:48:02.822743] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
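(Editor's note: the target has just come up inside the namespace. Condensed from the trace above into a bash sketch, with paths abbreviated and the SHM id and core mask taken from this run; waitforlisten is the autotest helper that polls /var/tmp/spdk.sock until the app answers RPCs:)
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"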
00:36:43.611 [2024-11-20 06:48:02.823890] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:43.611 [2024-11-20 06:48:02.823946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:43.611 [2024-11-20 06:48:02.925805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.611 [2024-11-20 06:48:02.977086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:43.611 [2024-11-20 06:48:02.977141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:43.611 [2024-11-20 06:48:02.977150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:43.611 [2024-11-20 06:48:02.977157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:43.611 [2024-11-20 06:48:02.977164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:43.611 [2024-11-20 06:48:02.977956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.611 [2024-11-20 06:48:03.055494] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:43.611 [2024-11-20 06:48:03.055792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:43.872 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:43.872 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:36:43.872 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:43.872 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:43.872 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:43.872 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:43.872 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:44.169 [2024-11-20 06:48:03.878864] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:44.169 ************************************ 00:36:44.169 START TEST lvs_grow_clean 00:36:44.169 ************************************ 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:44.169 06:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:44.432 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:36:44.432 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:36:44.691 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d333a9a7-1498-4ef8-b9ee-b571472fc62e 00:36:44.692 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d333a9a7-1498-4ef8-b9ee-b571472fc62e 00:36:44.692 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:36:44.692 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:36:44.692 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:36:44.692 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d333a9a7-1498-4ef8-b9ee-b571472fc62e lvol 150 00:36:44.951 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=fc09fe1d-4c7c-4663-b8ce-8c3066a4be0a 00:36:44.951 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:36:44.951 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:36:45.211 [2024-11-20 06:48:04.894507] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:36:45.211 [2024-11-20 06:48:04.894668] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:36:45.211 true 00:36:45.211 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d333a9a7-1498-4ef8-b9ee-b571472fc62e 00:36:45.211 06:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:36:45.211 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:36:45.211 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:45.471 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fc09fe1d-4c7c-4663-b8ce-8c3066a4be0a 00:36:45.731 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:45.731 [2024-11-20 06:48:05.595180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.731 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:45.991 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2947563 00:36:45.991 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:45.991 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:36:45.991 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2947563 /var/tmp/bdevperf.sock 00:36:45.991 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2947563 ']' 00:36:45.991 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock
00:36:45.991 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100
00:36:45.991 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:36:45.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:36:45.991 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable
00:36:45.991 06:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:36:45.991 [2024-11-20 06:48:05.829713] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:36:45.991 [2024-11-20 06:48:05.829815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947563 ]
00:36:46.252 [2024-11-20 06:48:05.927468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:46.252 [2024-11-20 06:48:05.980145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:46.822 06:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:36:46.822 06:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0
00:36:46.822 06:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:36:47.082 Nvme0n1
00:36:47.082 06:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:36:47.342 [
00:36:47.342 {
00:36:47.342 "name": "Nvme0n1",
00:36:47.342 "aliases": [
00:36:47.342 "fc09fe1d-4c7c-4663-b8ce-8c3066a4be0a"
00:36:47.342 ],
00:36:47.342 "product_name": "NVMe disk",
00:36:47.342 "block_size": 4096,
00:36:47.342 "num_blocks": 38912,
00:36:47.342 "uuid": "fc09fe1d-4c7c-4663-b8ce-8c3066a4be0a",
00:36:47.342 "numa_id": 0,
00:36:47.342 "assigned_rate_limits": {
00:36:47.342 "rw_ios_per_sec": 0,
00:36:47.342 "rw_mbytes_per_sec": 0,
00:36:47.342 "r_mbytes_per_sec": 0,
00:36:47.342 "w_mbytes_per_sec": 0
00:36:47.342 },
00:36:47.342 "claimed": false,
00:36:47.342 "zoned": false,
00:36:47.342 "supported_io_types": {
00:36:47.342 "read": true,
00:36:47.343 "write": true,
00:36:47.343 "unmap": true,
00:36:47.343 "flush": true,
00:36:47.343 "reset": true,
00:36:47.343 "nvme_admin": true,
00:36:47.343 "nvme_io": true,
00:36:47.343 "nvme_io_md": false,
00:36:47.343 "write_zeroes": true,
00:36:47.343 "zcopy": false,
00:36:47.343 "get_zone_info": false,
00:36:47.343 "zone_management": false,
00:36:47.343 "zone_append": false,
00:36:47.343 "compare": true,
00:36:47.343 "compare_and_write": true,
00:36:47.343 "abort": true,
00:36:47.343 "seek_hole": false,
00:36:47.343 "seek_data": false,
00:36:47.343 "copy": true,
00:36:47.343 "nvme_iov_md": false
00:36:47.343 },
00:36:47.343 "memory_domains": [
00:36:47.343 {
00:36:47.343 "dma_device_id": "system",
00:36:47.343 "dma_device_type": 1
00:36:47.343 }
00:36:47.343 ],
00:36:47.343 "driver_specific": {
00:36:47.343 "nvme": [
00:36:47.343 {
00:36:47.343 "trid": {
00:36:47.343 "trtype": "TCP",
00:36:47.343 "adrfam": "IPv4",
00:36:47.343 "traddr": "10.0.0.2",
00:36:47.343 "trsvcid": "4420",
00:36:47.343 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:36:47.343 },
00:36:47.343 "ctrlr_data": {
00:36:47.343 "cntlid": 1,
00:36:47.343 "vendor_id": "0x8086",
00:36:47.343 "model_number": "SPDK bdev Controller",
00:36:47.343 "serial_number": "SPDK0",
00:36:47.343 "firmware_revision": "25.01",
00:36:47.343 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:47.343 "oacs": {
00:36:47.343 "security": 0,
00:36:47.343 "format": 0,
00:36:47.343 "firmware": 0,
00:36:47.343 "ns_manage": 0
00:36:47.343 },
00:36:47.343 "multi_ctrlr": true,
00:36:47.343 "ana_reporting": false
00:36:47.343 },
00:36:47.343 "vs": {
00:36:47.343 "nvme_version": "1.3"
00:36:47.343 },
00:36:47.343 "ns_data": {
00:36:47.343 "id": 1,
00:36:47.343 "can_share": true
00:36:47.343 }
00:36:47.343 }
00:36:47.343 ],
00:36:47.343 "mp_policy": "active_passive"
00:36:47.343 }
00:36:47.343 }
00:36:47.343 ]
00:36:47.343 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2947886
00:36:47.343 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:36:47.343 06:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:36:47.343 Running I/O for 10 seconds...
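(Editor's note: while bdevperf runs, the test performs the grow it is named for. Condensed from the RPC trace above into a bash sketch; the rpc.py path and some options are shortened, and the lvstore UUID differs per run:)
  truncate -s 200M aio_file                    # 200 MiB backing file
  rpc.py bdev_aio_create aio_file aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
  rpc.py bdev_lvol_create -u "$lvs" lvol 150   # 150 MiB lvol, exported via cnode0
  truncate -s 400M aio_file                    # grow the backing file under the lvstore
  rpc.py bdev_aio_rescan aio_bdev              # pick up the new block count
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # claim the new clusters
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99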
00:36:48.725 Latency(us)
00:36:48.725 [2024-11-20T05:48:08.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:48.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:48.725 Nvme0n1 : 1.00 16701.00 65.24 0.00 0.00 0.00 0.00 0.00
00:36:48.725 [2024-11-20T05:48:08.645Z] ===================================================================================================================
00:36:48.725 [2024-11-20T05:48:08.645Z] Total : 16701.00 65.24 0.00 0.00 0.00 0.00 0.00
00:36:48.725
00:36:49.296 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d333a9a7-1498-4ef8-b9ee-b571472fc62e
00:36:49.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:49.556 Nvme0n1 : 2.00 16986.50 66.35 0.00 0.00 0.00 0.00 0.00
00:36:49.556 [2024-11-20T05:48:09.476Z] ===================================================================================================================
00:36:49.556 [2024-11-20T05:48:09.476Z] Total : 16986.50 66.35 0.00 0.00 0.00 0.00 0.00
00:36:49.556
00:36:49.556 true
00:36:49.556 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d333a9a7-1498-4ef8-b9ee-b571472fc62e
00:36:49.556 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:36:49.816 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:36:49.816 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:36:49.816 06:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2947886
00:36:50.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:50.384 Nvme0n1 : 3.00 17172.00 67.08 0.00 0.00 0.00 0.00 0.00
00:36:50.384 [2024-11-20T05:48:10.304Z] ===================================================================================================================
00:36:50.384 [2024-11-20T05:48:10.304Z] Total : 17172.00 67.08 0.00 0.00 0.00 0.00 0.00
00:36:50.384
00:36:51.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:51.765 Nvme0n1 : 4.00 17863.75 69.78 0.00 0.00 0.00 0.00 0.00
00:36:51.765 [2024-11-20T05:48:11.685Z] ===================================================================================================================
00:36:51.765 [2024-11-20T05:48:11.685Z] Total : 17863.75 69.78 0.00 0.00 0.00 0.00 0.00
00:36:51.765
00:36:52.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:52.336 Nvme0n1 : 5.00 19358.40 75.62 0.00 0.00 0.00 0.00 0.00
00:36:52.336 [2024-11-20T05:48:12.256Z] ===================================================================================================================
00:36:52.336 [2024-11-20T05:48:12.256Z] Total : 19358.40 75.62 0.00 0.00 0.00 0.00 0.00
00:36:52.336
00:36:53.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:53.717 Nvme0n1 : 6.00 20363.00 79.54 0.00 0.00 0.00 0.00 0.00
00:36:53.717 [2024-11-20T05:48:13.637Z] ===================================================================================================================
00:36:53.717 [2024-11-20T05:48:13.637Z] Total : 20363.00 79.54 0.00 0.00 0.00 0.00 0.00
00:36:53.717
00:36:54.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:54.657 Nvme0n1 : 7.00 21082.57 82.35 0.00 0.00 0.00 0.00 0.00
00:36:54.657 [2024-11-20T05:48:14.577Z] ===================================================================================================================
00:36:54.657 [2024-11-20T05:48:14.577Z] Total : 21082.57 82.35 0.00 0.00 0.00 0.00 0.00
00:36:54.657
00:36:55.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:55.598 Nvme0n1 : 8.00 21622.38 84.46 0.00 0.00 0.00 0.00 0.00
00:36:55.598 [2024-11-20T05:48:15.518Z] ===================================================================================================================
00:36:55.598 [2024-11-20T05:48:15.518Z] Total : 21622.38 84.46 0.00 0.00 0.00 0.00 0.00
00:36:55.598
00:36:56.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:56.538 Nvme0n1 : 9.00 22049.44 86.13 0.00 0.00 0.00 0.00 0.00
00:36:56.538 [2024-11-20T05:48:16.458Z] ===================================================================================================================
00:36:56.538 [2024-11-20T05:48:16.458Z] Total : 22049.44 86.13 0.00 0.00 0.00 0.00 0.00
00:36:56.538
00:36:57.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:57.477 Nvme0n1 : 10.00 22389.50 87.46 0.00 0.00 0.00 0.00 0.00
00:36:57.477 [2024-11-20T05:48:17.397Z] ===================================================================================================================
00:36:57.477 [2024-11-20T05:48:17.397Z] Total : 22389.50 87.46 0.00 0.00 0.00 0.00 0.00
00:36:57.477
00:36:57.477
00:36:57.477 Latency(us)
00:36:57.477 [2024-11-20T05:48:17.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:57.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:57.477 Nvme0n1 : 10.00 22391.29 87.47 0.00 0.00 5713.41 2894.51 32768.00
00:36:57.477 [2024-11-20T05:48:17.397Z] ===================================================================================================================
00:36:57.477 [2024-11-20T05:48:17.397Z] Total : 22391.29 87.47 0.00 0.00 5713.41 2894.51 32768.00
00:36:57.477 {
00:36:57.477 "results": [
00:36:57.477 {
00:36:57.477 "job": "Nvme0n1",
00:36:57.477 "core_mask": "0x2",
00:36:57.477 "workload": "randwrite",
00:36:57.477 "status": "finished",
00:36:57.477 "queue_depth": 128,
00:36:57.477 "io_size": 4096,
00:36:57.477 "runtime": 10.004919,
00:36:57.477 "iops": 22391.28572655111,
00:36:57.477 "mibps": 87.46595986934027,
00:36:57.477 "io_failed": 0,
00:36:57.477 "io_timeout": 0,
00:36:57.477 "avg_latency_us": 5713.412471159955,
00:36:57.477 "min_latency_us": 2894.5066666666667,
00:36:57.477 "max_latency_us": 32768.0
00:36:57.477 }
00:36:57.477 ],
00:36:57.477 "core_count": 1
00:36:57.477 }
00:36:57.477 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2947563
00:36:57.477 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2947563 ']'
00:36:57.477 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2947563
00:36:57.477 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname
00:36:57.477 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:36:57.477 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2947563
00:36:57.477 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:36:57.477 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:36:57.477 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2947563'
00:36:57.477 killing process with pid 2947563
00:36:57.477 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2947563
00:36:57.477 Received shutdown signal, test time was about 10.000000 seconds
00:36:57.477
00:36:57.477 Latency(us)
00:36:57.477 [2024-11-20T05:48:17.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:57.477 [2024-11-20T05:48:17.397Z] ===================================================================================================================
00:36:57.477 [2024-11-20T05:48:17.397Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:57.477 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2947563
00:36:57.736 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:57.736 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:57.995 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d333a9a7-1498-4ef8-b9ee-b571472fc62e
00:36:57.996 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:36:58.255 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:36:58.255 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:36:58.255 06:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:36:58.255 [2024-11-20 06:48:18.110556] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d333a9a7-1498-4ef8-b9ee-b571472fc62e
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d333a9a7-1498-4ef8-b9ee-b571472fc62e
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:36:58.255 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d333a9a7-1498-4ef8-b9ee-b571472fc62e
00:36:58.515 request:
00:36:58.515 {
00:36:58.515 "uuid": "d333a9a7-1498-4ef8-b9ee-b571472fc62e",
00:36:58.515 "method": "bdev_lvol_get_lvstores",
00:36:58.515 "req_id": 1
00:36:58.515 }
00:36:58.515 Got JSON-RPC error response
00:36:58.515 response:
00:36:58.515 {
00:36:58.515 "code": -19,
00:36:58.515 "message": "No such device"
00:36:58.515 }
00:36:58.515 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1
00:36:58.515 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:36:58.515 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:36:58.515 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:36:58.515 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:36:58.775 aio_bdev
00:36:58.775 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fc09fe1d-4c7c-4663-b8ce-8c3066a4be0a
00:36:58.775 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=fc09fe1d-4c7c-4663-b8ce-8c3066a4be0a
00:36:58.775 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:36:58.775 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i
00:36:58.775 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:36:58.775 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:36:58.775 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:36:58.775 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fc09fe1d-4c7c-4663-b8ce-8c3066a4be0a -t 2000
00:36:59.035 [
00:36:59.035 {
00:36:59.035 "name": "fc09fe1d-4c7c-4663-b8ce-8c3066a4be0a",
00:36:59.035 "aliases": [
00:36:59.035 "lvs/lvol"
00:36:59.035 ],
00:36:59.035 "product_name": "Logical Volume",
00:36:59.035 "block_size": 4096,
00:36:59.035 "num_blocks": 38912,
00:36:59.035 "uuid": "fc09fe1d-4c7c-4663-b8ce-8c3066a4be0a",
00:36:59.035 "assigned_rate_limits": {
00:36:59.035 "rw_ios_per_sec": 0,
00:36:59.035 "rw_mbytes_per_sec": 0,
00:36:59.035 "r_mbytes_per_sec": 0,
00:36:59.035 "w_mbytes_per_sec": 0
00:36:59.035 },
00:36:59.035 "claimed": false,
00:36:59.035 "zoned": false,
00:36:59.035 "supported_io_types": {
00:36:59.035 "read": true,
00:36:59.035 "write": true,
00:36:59.035 "unmap": true,
00:36:59.035 "flush": false,
00:36:59.035 "reset": true,
00:36:59.035 "nvme_admin": false,
00:36:59.035 "nvme_io": false,
00:36:59.035 "nvme_io_md": false,
00:36:59.035 "write_zeroes": true,
00:36:59.035 "zcopy": false,
00:36:59.035 "get_zone_info": false,
00:36:59.035 "zone_management": false,
00:36:59.035 "zone_append": false,
00:36:59.035 "compare": false,
00:36:59.035 "compare_and_write": false,
00:36:59.035 "abort": false,
00:36:59.035 "seek_hole": true,
00:36:59.035 "seek_data": true,
00:36:59.035 "copy": false,
00:36:59.035 "nvme_iov_md": false
00:36:59.035 },
00:36:59.035 "driver_specific": {
00:36:59.035 "lvol": {
00:36:59.035 "lvol_store_uuid": "d333a9a7-1498-4ef8-b9ee-b571472fc62e",
00:36:59.035 "base_bdev": "aio_bdev",
00:36:59.035 "thin_provision": false,
00:36:59.035 "num_allocated_clusters": 38,
00:36:59.035 "snapshot": false,
00:36:59.035 "clone": false,
00:36:59.035 "esnap_clone": false
00:36:59.035 }
00:36:59.035 }
00:36:59.035 }
00:36:59.035 ]
00:36:59.035 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0
00:36:59.035 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d333a9a7-1498-4ef8-b9ee-b571472fc62e
00:36:59.035 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:36:59.295 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:36:59.295 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d333a9a7-1498-4ef8-b9ee-b571472fc62e
00:36:59.295 06:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:36:59.295 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:36:59.295 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fc09fe1d-4c7c-4663-b8ce-8c3066a4be0a
00:36:59.554 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d333a9a7-1498-4ef8-b9ee-b571472fc62e
00:36:59.813 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:36:59.813 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:36:59.813
00:36:59.813 real 0m15.748s
00:36:59.813 user 0m15.540s
00:36:59.813 sys 0m1.377s
00:36:59.813 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable
00:36:59.813 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:36:59.813 ************************************
00:36:59.813 END TEST lvs_grow_clean
00:36:59.813 ************************************
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:37:00.074 ************************************
00:37:00.074 START TEST lvs_grow_dirty
00:37:00.074 ************************************
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:37:00.074 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:37:00.334 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:37:00.334 06:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:37:00.334 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6a7e6d4c-0e71-40db-af9f-55632b0f7f51
00:37:00.334 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51
00:37:00.334 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:37:00.593 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:37:00.593 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:37:00.594 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51 lvol 150
00:37:00.854 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6f3f3c98-7840-4bf7-ad2c-57effdc21d52
00:37:00.854 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:37:00.854 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:37:00.854 [2024-11-20 06:48:20.678492] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:37:00.854 [2024-11-20 06:48:20.678645] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:37:00.854 true
00:37:00.854 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51
00:37:00.854 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:37:01.114 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:37:01.114 06:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:37:01.373 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6f3f3c98-7840-4bf7-ad2c-57effdc21d52
00:37:01.373 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:37:01.633 [2024-11-20 06:48:21.343005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:01.633 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:37:01.633 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2951082
00:37:01.633 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:37:01.633 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:37:01.633 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2951082 /var/tmp/bdevperf.sock
00:37:01.633 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2951082 ']'
00:37:01.633 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:37:01.633 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100
00:37:01.633 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:37:01.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:37:01.633 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable
00:37:01.633 06:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:37:01.893 [2024-11-20 06:48:21.581293] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:37:01.893 [2024-11-20 06:48:21.581377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951082 ]
00:37:01.893 [2024-11-20 06:48:21.667923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:01.893 [2024-11-20 06:48:21.697864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:02.462 06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:37:02.462 06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0
00:37:02.462 06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:37:02.723 Nvme0n1
00:37:02.723 06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:37:02.983 [
00:37:02.983 {
00:37:02.983 "name": "Nvme0n1",
00:37:02.983 "aliases": [
00:37:02.983 "6f3f3c98-7840-4bf7-ad2c-57effdc21d52"
00:37:02.983 ],
00:37:02.983 "product_name": "NVMe disk",
00:37:02.983 "block_size": 4096,
00:37:02.983 "num_blocks": 38912,
00:37:02.983 "uuid": "6f3f3c98-7840-4bf7-ad2c-57effdc21d52",
00:37:02.983 "numa_id": 0,
00:37:02.983 "assigned_rate_limits": {
00:37:02.983 "rw_ios_per_sec": 0,
00:37:02.983 "rw_mbytes_per_sec": 0,
00:37:02.983 "r_mbytes_per_sec": 0,
00:37:02.983 "w_mbytes_per_sec": 0
00:37:02.983 },
00:37:02.983 "claimed": false,
00:37:02.983 "zoned": false,
00:37:02.983 "supported_io_types": {
00:37:02.983 "read": true,
00:37:02.983 "write": true,
00:37:02.983 "unmap": true,
00:37:02.983 "flush": true,
00:37:02.983 "reset": true,
00:37:02.983 "nvme_admin": true,
00:37:02.983 "nvme_io": true,
00:37:02.983 "nvme_io_md": false,
00:37:02.983 "write_zeroes": true,
00:37:02.983 "zcopy": false,
00:37:02.983 "get_zone_info": false,
00:37:02.983 "zone_management": false,
00:37:02.983 "zone_append": false,
00:37:02.983 "compare": true,
00:37:02.983 "compare_and_write": true,
00:37:02.983 "abort": true,
00:37:02.983 "seek_hole": false,
00:37:02.983 "seek_data": false,
00:37:02.983 "copy": true,
00:37:02.983 "nvme_iov_md": false
00:37:02.983 },
00:37:02.983 "memory_domains": [
00:37:02.983 {
00:37:02.983 "dma_device_id": "system",
00:37:02.983 "dma_device_type": 1
00:37:02.983 }
00:37:02.983 ],
00:37:02.983 "driver_specific": {
00:37:02.983 "nvme": [
00:37:02.983 {
00:37:02.983 "trid": {
00:37:02.983 "trtype": "TCP",
00:37:02.983 "adrfam": "IPv4",
00:37:02.983 "traddr": "10.0.0.2",
00:37:02.983 "trsvcid": "4420",
00:37:02.983 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:37:02.983 },
00:37:02.983 "ctrlr_data": {
00:37:02.983 "cntlid": 1,
00:37:02.983 "vendor_id": "0x8086",
00:37:02.983 "model_number": "SPDK bdev Controller",
00:37:02.983 "serial_number": "SPDK0",
00:37:02.983 "firmware_revision": "25.01",
00:37:02.983 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:02.983 "oacs": {
00:37:02.983 "security": 0,
00:37:02.983 "format": 0,
00:37:02.983 "firmware": 0,
00:37:02.983 "ns_manage": 0
00:37:02.983 },
00:37:02.983 "multi_ctrlr": true,
00:37:02.983 "ana_reporting": false
00:37:02.983 },
00:37:02.983 "vs": {
00:37:02.983 "nvme_version": "1.3"
00:37:02.983 },
00:37:02.983 "ns_data": {
00:37:02.983 "id": 1,
00:37:02.983 "can_share": true
00:37:02.983 }
00:37:02.983 }
00:37:02.983 ],
00:37:02.983 "mp_policy": "active_passive"
00:37:02.983 }
00:37:02.983 }
00:37:02.983 ]
00:37:02.983 06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2951424
00:37:02.983 06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:37:02.983 06:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:37:02.983 Running I/O for 10 seconds...
00:37:04.365 Latency(us)
00:37:04.365 [2024-11-20T05:48:24.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:04.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:04.365 Nvme0n1 : 1.00 17083.00 66.73 0.00 0.00 0.00 0.00 0.00
00:37:04.365 [2024-11-20T05:48:24.285Z] ===================================================================================================================
00:37:04.365 [2024-11-20T05:48:24.285Z] Total : 17083.00 66.73 0.00 0.00 0.00 0.00 0.00
00:37:04.365
00:37:04.934 06:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51
00:37:04.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:04.934 Nvme0n1 : 2.00 17293.50 67.55 0.00 0.00 0.00 0.00 0.00
00:37:04.934 [2024-11-20T05:48:24.854Z] ===================================================================================================================
00:37:04.934 [2024-11-20T05:48:24.854Z] Total : 17293.50 67.55 0.00 0.00 0.00 0.00 0.00
00:37:04.934
00:37:05.194 true
00:37:05.194 06:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51
00:37:05.194 06:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:37:05.194 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:37:05.194 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:37:05.195 06:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2951424
00:37:06.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:06.135 Nvme0n1 : 3.00 17374.33 67.87 0.00 0.00 0.00 0.00 0.00
00:37:06.135 [2024-11-20T05:48:26.055Z] ===================================================================================================================
00:37:06.135 [2024-11-20T05:48:26.055Z] Total : 17374.33 67.87 0.00 0.00 0.00 0.00 0.00
00:37:06.135
00:37:07.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:07.076 Nvme0n1 : 4.00 17426.75 68.07 0.00 0.00 0.00 0.00 0.00
00:37:07.076 [2024-11-20T05:48:26.996Z] ===================================================================================================================
00:37:07.076 [2024-11-20T05:48:26.996Z] Total : 17426.75 68.07 0.00 0.00 0.00 0.00 0.00
00:37:07.076
00:37:08.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:08.016 Nvme0n1 : 5.00 17791.00 69.50 0.00 0.00 0.00 0.00 0.00
00:37:08.016 [2024-11-20T05:48:27.936Z] ===================================================================================================================
00:37:08.016 [2024-11-20T05:48:27.936Z] Total : 17791.00 69.50 0.00 0.00 0.00 0.00 0.00
00:37:08.016
00:37:08.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:08.955 Nvme0n1 : 6.00 18959.17 74.06 0.00 0.00 0.00 0.00 0.00
00:37:08.955 [2024-11-20T05:48:28.875Z] ===================================================================================================================
00:37:08.955 [2024-11-20T05:48:28.875Z] Total : 18959.17 74.06 0.00 0.00 0.00 0.00 0.00
00:37:08.955
00:37:10.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:10.336 Nvme0n1 : 7.00 19793.57 77.32 0.00 0.00 0.00 0.00 0.00
00:37:10.336 [2024-11-20T05:48:30.256Z] ===================================================================================================================
00:37:10.336 [2024-11-20T05:48:30.256Z] Total : 19793.57 77.32 0.00 0.00 0.00 0.00 0.00
00:37:10.336
00:37:11.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:11.276 Nvme0n1 : 8.00 20409.38 79.72 0.00 0.00 0.00 0.00 0.00
00:37:11.276 [2024-11-20T05:48:31.196Z] ===================================================================================================================
00:37:11.276 [2024-11-20T05:48:31.196Z] Total : 20409.38 79.72 0.00 0.00 0.00 0.00 0.00
00:37:11.276
00:37:12.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:12.218 Nvme0n1 : 9.00 20899.00 81.64 0.00 0.00 0.00 0.00 0.00
00:37:12.218 [2024-11-20T05:48:32.138Z] ===================================================================================================================
00:37:12.218 [2024-11-20T05:48:32.138Z] Total : 20899.00 81.64 0.00 0.00 0.00 0.00 0.00
00:37:12.218
00:37:13.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:13.157 Nvme0n1 : 10.00 21295.50 83.19 0.00 0.00 0.00 0.00 0.00
00:37:13.157 [2024-11-20T05:48:33.077Z] ===================================================================================================================
00:37:13.157 [2024-11-20T05:48:33.077Z] Total : 21295.50 83.19 0.00 0.00 0.00 0.00 0.00
00:37:13.157
00:37:13.157
00:37:13.157 Latency(us)
00:37:13.157 [2024-11-20T05:48:33.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:13.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:13.157 Nvme0n1 : 10.00 21297.00 83.19 0.00 0.00 6006.62 3495.25 22937.60
00:37:13.157 [2024-11-20T05:48:33.077Z] ===================================================================================================================
00:37:13.157 [2024-11-20T05:48:33.077Z] Total : 21297.00 83.19 0.00 0.00 6006.62 3495.25 22937.60
00:37:13.157 {
00:37:13.157 "results": [
00:37:13.157 {
00:37:13.157 "job": "Nvme0n1",
00:37:13.157 "core_mask": "0x2",
00:37:13.157 "workload": "randwrite",
00:37:13.157 "status": "finished",
00:37:13.157 "queue_depth": 128,
00:37:13.157 "io_size": 4096,
00:37:13.157 "runtime": 10.004555,
00:37:13.157 "iops": 21296.999216856722,
00:37:13.157 "mibps": 83.19140319084657,
00:37:13.157 "io_failed": 0,
00:37:13.157 "io_timeout": 0,
00:37:13.157 "avg_latency_us": 6006.618055228324,
00:37:13.157 "min_latency_us": 3495.2533333333336,
00:37:13.157 "max_latency_us": 22937.6
00:37:13.157 }
00:37:13.157 ],
00:37:13.157 "core_count": 1
00:37:13.157 }
00:37:13.157 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2951082
00:37:13.157 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2951082 ']'
00:37:13.157 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2951082
00:37:13.157 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname
00:37:13.157 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:37:13.157 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2951082
00:37:13.157 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:37:13.157 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:37:13.157 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2951082'
00:37:13.157 killing process with pid 2951082
00:37:13.157 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2951082
00:37:13.157 Received shutdown signal, test time was about 10.000000 seconds
00:37:13.157
00:37:13.157 Latency(us)
00:37:13.157 [2024-11-20T05:48:33.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:13.157 [2024-11-20T05:48:33.077Z] ===================================================================================================================
00:37:13.157 [2024-11-20T05:48:33.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:13.157 06:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2951082
00:37:13.157 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:37:13.416 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:37:13.675 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51
00:37:13.675 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:37:13.675 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:37:13.675 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:37:13.675 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2947093
00:37:13.676 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2947093
00:37:13.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2947093 Killed "${NVMF_APP[@]}" "$@"
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2953437
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2953437
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2953437 ']'
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:13.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:13.934 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable
00:37:13.935 06:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:37:13.935 [2024-11-20 06:48:33.694074] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:37:13.935 [2024-11-20 06:48:33.695132] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:37:13.935 [2024-11-20 06:48:33.695180] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:13.935 [2024-11-20 06:48:33.786735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:13.935 [2024-11-20 06:48:33.817675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:13.935 [2024-11-20 06:48:33.817704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:13.935 [2024-11-20 06:48:33.817710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:13.935 [2024-11-20 06:48:33.817715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:13.935 [2024-11-20 06:48:33.817719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:13.935 [2024-11-20 06:48:33.818228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:14.194 [2024-11-20 06:48:33.870115] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:37:14.194 [2024-11-20 06:48:33.870301] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:37:14.763 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:14.763 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:37:14.763 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:14.763 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:14.763 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:14.763 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:14.763 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:15.022 [2024-11-20 06:48:34.700638] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:37:15.022 [2024-11-20 06:48:34.700902] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:37:15.022 [2024-11-20 06:48:34.700995] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:37:15.022 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:37:15.022 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6f3f3c98-7840-4bf7-ad2c-57effdc21d52 00:37:15.022 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=6f3f3c98-7840-4bf7-ad2c-57effdc21d52 00:37:15.022 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:37:15.022 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:37:15.022 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:37:15.022 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:37:15.022 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:15.022 06:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6f3f3c98-7840-4bf7-ad2c-57effdc21d52 -t 2000 00:37:15.282 [ 00:37:15.282 { 00:37:15.282 "name": "6f3f3c98-7840-4bf7-ad2c-57effdc21d52", 00:37:15.282 "aliases": [ 00:37:15.282 "lvs/lvol" 00:37:15.282 ], 00:37:15.282 "product_name": "Logical Volume", 00:37:15.282 "block_size": 4096, 00:37:15.282 "num_blocks": 38912, 00:37:15.282 "uuid": "6f3f3c98-7840-4bf7-ad2c-57effdc21d52", 00:37:15.282 "assigned_rate_limits": { 00:37:15.282 "rw_ios_per_sec": 0, 00:37:15.282 "rw_mbytes_per_sec": 0, 00:37:15.282 
"r_mbytes_per_sec": 0, 00:37:15.282 "w_mbytes_per_sec": 0 00:37:15.282 }, 00:37:15.282 "claimed": false, 00:37:15.282 "zoned": false, 00:37:15.282 "supported_io_types": { 00:37:15.282 "read": true, 00:37:15.282 "write": true, 00:37:15.282 "unmap": true, 00:37:15.282 "flush": false, 00:37:15.282 "reset": true, 00:37:15.282 "nvme_admin": false, 00:37:15.282 "nvme_io": false, 00:37:15.282 "nvme_io_md": false, 00:37:15.282 "write_zeroes": true, 00:37:15.282 "zcopy": false, 00:37:15.282 "get_zone_info": false, 00:37:15.282 "zone_management": false, 00:37:15.282 "zone_append": false, 00:37:15.282 "compare": false, 00:37:15.282 "compare_and_write": false, 00:37:15.282 "abort": false, 00:37:15.282 "seek_hole": true, 00:37:15.282 "seek_data": true, 00:37:15.282 "copy": false, 00:37:15.282 "nvme_iov_md": false 00:37:15.282 }, 00:37:15.282 "driver_specific": { 00:37:15.282 "lvol": { 00:37:15.282 "lvol_store_uuid": "6a7e6d4c-0e71-40db-af9f-55632b0f7f51", 00:37:15.282 "base_bdev": "aio_bdev", 00:37:15.282 "thin_provision": false, 00:37:15.282 "num_allocated_clusters": 38, 00:37:15.282 "snapshot": false, 00:37:15.282 "clone": false, 00:37:15.282 "esnap_clone": false 00:37:15.282 } 00:37:15.282 } 00:37:15.282 } 00:37:15.282 ] 00:37:15.282 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:37:15.282 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51 00:37:15.282 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:37:15.542 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:37:15.542 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51 00:37:15.542 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:37:15.542 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:37:15.542 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:15.803 [2024-11-20 06:48:35.582697] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:15.803 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51 00:37:15.803 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:37:15.803 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51 00:37:15.803 06:48:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:15.803 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:15.803 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:15.803 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:15.804 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:15.804 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:15.804 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:15.804 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:15.804 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51 00:37:16.064 request: 00:37:16.064 { 00:37:16.064 "uuid": "6a7e6d4c-0e71-40db-af9f-55632b0f7f51", 00:37:16.064 "method": "bdev_lvol_get_lvstores", 00:37:16.064 "req_id": 1 00:37:16.064 } 00:37:16.064 Got JSON-RPC error response 00:37:16.064 response: 00:37:16.064 { 00:37:16.064 "code": -19, 00:37:16.064 "message": "No such device" 00:37:16.064 } 00:37:16.064 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:37:16.064 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:16.064 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:16.064 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:16.064 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:16.064 aio_bdev 00:37:16.064 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6f3f3c98-7840-4bf7-ad2c-57effdc21d52 00:37:16.064 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=6f3f3c98-7840-4bf7-ad2c-57effdc21d52 00:37:16.064 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:37:16.064 06:48:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:37:16.064 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:37:16.064 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:37:16.064 06:48:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:16.326 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6f3f3c98-7840-4bf7-ad2c-57effdc21d52 -t 2000 00:37:16.586 [ 00:37:16.587 { 00:37:16.587 "name": "6f3f3c98-7840-4bf7-ad2c-57effdc21d52", 00:37:16.587 "aliases": [ 00:37:16.587 "lvs/lvol" 00:37:16.587 ], 00:37:16.587 "product_name": "Logical Volume", 00:37:16.587 "block_size": 4096, 00:37:16.587 "num_blocks": 38912, 00:37:16.587 "uuid": "6f3f3c98-7840-4bf7-ad2c-57effdc21d52", 00:37:16.587 "assigned_rate_limits": { 00:37:16.587 "rw_ios_per_sec": 0, 00:37:16.587 "rw_mbytes_per_sec": 0, 00:37:16.587 "r_mbytes_per_sec": 0, 00:37:16.587 "w_mbytes_per_sec": 0 00:37:16.587 }, 00:37:16.587 "claimed": false, 00:37:16.587 "zoned": false, 00:37:16.587 "supported_io_types": { 00:37:16.587 "read": true, 00:37:16.587 "write": true, 00:37:16.587 "unmap": true, 00:37:16.587 "flush": false, 00:37:16.587 "reset": true, 00:37:16.587 "nvme_admin": false, 00:37:16.587 "nvme_io": false, 00:37:16.587 "nvme_io_md": false, 00:37:16.587 "write_zeroes": true, 00:37:16.587 "zcopy": false, 00:37:16.587 "get_zone_info": false, 00:37:16.587 "zone_management": false, 00:37:16.587 "zone_append": false, 00:37:16.587 "compare": false, 00:37:16.587 "compare_and_write": false, 00:37:16.587 "abort": false, 00:37:16.587 "seek_hole": true, 00:37:16.587 "seek_data": true, 00:37:16.587 "copy": false, 00:37:16.587 "nvme_iov_md": false 00:37:16.587 }, 00:37:16.587 "driver_specific": { 00:37:16.587 "lvol": { 00:37:16.587 "lvol_store_uuid": "6a7e6d4c-0e71-40db-af9f-55632b0f7f51", 00:37:16.587 "base_bdev": "aio_bdev", 00:37:16.587 "thin_provision": false, 00:37:16.587 "num_allocated_clusters": 38, 00:37:16.587 "snapshot": false, 00:37:16.587 "clone": false, 00:37:16.587 "esnap_clone": false 00:37:16.587 } 00:37:16.587 } 00:37:16.587 } 00:37:16.587 ] 00:37:16.587 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:37:16.587 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51 00:37:16.587 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:16.587 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:16.587 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51 00:37:16.587 06:48:36 
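The waitforbdev sequence traced above is a simple poll: wait for bdev examine to finish, then look the lvol up by name with the 2000 ms timeout passed as -t. Condensed, with the bdev name from this run:

    "$rpc" bdev_wait_for_examine
    "$rpc" bdev_get_bdevs -b 6f3f3c98-7840-4bf7-ad2c-57effdc21d52 -t 2000 >/dev/null \
        && echo "lvol re-registered after aio_bdev was re-created"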
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:16.847 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:16.847 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6f3f3c98-7840-4bf7-ad2c-57effdc21d52 00:37:17.123 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6a7e6d4c-0e71-40db-af9f-55632b0f7f51 00:37:17.123 06:48:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:17.450 00:37:17.450 real 0m17.417s 00:37:17.450 user 0m35.172s 00:37:17.450 sys 0m3.240s 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:17.450 ************************************ 00:37:17.450 END TEST lvs_grow_dirty 00:37:17.450 ************************************ 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:37:17.450 nvmf_trace.0 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
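Teardown runs strictly top-down through the dependency chain before process_shm archives the trace file out of /dev/shm. As a sketch of the ordering (commands as in the trace; $SPDK_DIR is shorthand here for the workspace spdk tree):

    "$rpc" bdev_lvol_delete 6f3f3c98-7840-4bf7-ad2c-57effdc21d52   # lvol first
    "$rpc" bdev_lvol_delete_lvstore -u "$uuid"                     # then its lvstore
    "$rpc" bdev_aio_delete aio_bdev                                # then the base bdev
    rm -f "$SPDK_DIR"/test/nvmf/target/aio_bdev                    # backing file last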
00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:17.450 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:17.450 rmmod nvme_tcp 00:37:17.450 rmmod nvme_fabrics 00:37:17.450 rmmod nvme_keyring 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2953437 ']' 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2953437 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2953437 ']' 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2953437 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2953437 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2953437' 00:37:17.762 killing process with pid 2953437 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2953437 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2953437 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
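The iptr step in nvmftestfini is deliberately surgical: every rule the test installs carries an -m comment --comment 'SPDK_NVMF:...' tag (visible later in this log when the ACCEPT rule goes in), so cleanup can drop exactly those rules and leave the rest of the firewall state intact:

    iptables-save | grep -v SPDK_NVMF | iptables-restore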
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:17.762 06:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:20.305 00:37:20.305 real 0m44.699s 00:37:20.305 user 0m53.797s 00:37:20.305 sys 0m10.772s 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:20.305 ************************************ 00:37:20.305 END TEST nvmf_lvs_grow 00:37:20.305 ************************************ 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:20.305 ************************************ 00:37:20.305 START TEST nvmf_bdev_io_wait 00:37:20.305 ************************************ 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:20.305 * Looking for test storage... 
00:37:20.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:37:20.305 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:20.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.306 --rc genhtml_branch_coverage=1 00:37:20.306 --rc genhtml_function_coverage=1 00:37:20.306 --rc genhtml_legend=1 00:37:20.306 --rc geninfo_all_blocks=1 00:37:20.306 --rc geninfo_unexecuted_blocks=1 00:37:20.306 00:37:20.306 ' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:20.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.306 --rc genhtml_branch_coverage=1 00:37:20.306 --rc genhtml_function_coverage=1 00:37:20.306 --rc genhtml_legend=1 00:37:20.306 --rc geninfo_all_blocks=1 00:37:20.306 --rc geninfo_unexecuted_blocks=1 00:37:20.306 00:37:20.306 ' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:20.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.306 --rc genhtml_branch_coverage=1 00:37:20.306 --rc genhtml_function_coverage=1 00:37:20.306 --rc genhtml_legend=1 00:37:20.306 --rc geninfo_all_blocks=1 00:37:20.306 --rc geninfo_unexecuted_blocks=1 00:37:20.306 00:37:20.306 ' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:20.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.306 --rc genhtml_branch_coverage=1 00:37:20.306 --rc genhtml_function_coverage=1 00:37:20.306 --rc genhtml_legend=1 00:37:20.306 --rc geninfo_all_blocks=1 00:37:20.306 --rc 
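The cmp_versions trace above splits each version string on ".", "-", and ":" and compares field by field; for 1.15 vs 2 it stops at the first field (1 < 2), so the pre-2.x lcov option set gets exported. A self-contained re-implementation of the idiom (hypothetical function name, not the script's own):

    version_lt() {                       # returns 0 when $1 sorts before $2
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # matches this run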
geninfo_unexecuted_blocks=1 00:37:20.306 00:37:20.306 ' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:37:20.306 06:48:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:28.439 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:28.439 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:28.439 Found net devices under 0000:31:00.0: cvl_0_0 00:37:28.439 
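Device discovery here is sysfs-driven: for each supported PCI function the script globs the net/ directory to find the kernel interface name. On this rig both E810 ports (device 0x159b, ice driver) resolve to cvl_0_0 and cvl_0_1:

    pci=0000:31:00.0
    ls /sys/bus/pci/devices/$pci/net/    # -> cvl_0_0 here; 0000:31:00.1 yields cvl_0_1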
06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:28.439 Found net devices under 0000:31:00.1: cvl_0_1 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:28.439 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:28.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:28.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:37:28.440 00:37:28.440 --- 10.0.0.2 ping statistics --- 00:37:28.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:28.440 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:28.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
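The target/initiator split above, in one place: the target port is moved into a private network namespace so both ends of the NVMe/TCP connection can live on one host while the traffic still crosses the physical NICs. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check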
00:37:28.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:37:28.440 00:37:28.440 --- 10.0.0.1 ping statistics --- 00:37:28.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:28.440 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2958371 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2958371 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2958371 ']' 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:28.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:28.440 06:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:28.440 [2024-11-20 06:48:47.579398] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:28.440 [2024-11-20 06:48:47.580604] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:37:28.440 [2024-11-20 06:48:47.580656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:28.440 [2024-11-20 06:48:47.684958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:28.440 [2024-11-20 06:48:47.741006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:28.440 [2024-11-20 06:48:47.741063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:28.440 [2024-11-20 06:48:47.741072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:28.440 [2024-11-20 06:48:47.741079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:28.440 [2024-11-20 06:48:47.741086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:28.440 [2024-11-20 06:48:47.743532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:28.440 [2024-11-20 06:48:47.743695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:28.440 [2024-11-20 06:48:47.743854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:28.440 [2024-11-20 06:48:47.743855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.440 [2024-11-20 06:48:47.744332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.700 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:28.700 [2024-11-20 06:48:48.525598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:28.700 [2024-11-20 06:48:48.526380] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:28.700 [2024-11-20 06:48:48.526418] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:28.701 [2024-11-20 06:48:48.526598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
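Because the target was launched with --wait-for-rpc, subsystem initialization is deferred until the test has applied its bdev options; only then does framework_start_init spin up the poll-group threads (all switched to interrupt mode, per the notices above). The startup handshake, condensed as a sketch (binary path as in the trace, run from the SPDK tree):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    ./scripts/rpc.py bdev_set_options -p 5 -c 1   # bdev io pool/cache sizing flags, as used by this test
    ./scripts/rpc.py framework_start_init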
00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:28.701 [2024-11-20 06:48:48.536853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:28.701 Malloc0 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:28.701 [2024-11-20 06:48:48.608988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.701 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2958562 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
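The target-side object graph for the bdev_io_wait test is five RPCs deep: transport, backing malloc bdev, subsystem, namespace, listener. Extracted from the trace:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0               # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420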
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2958564 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.961 { 00:37:28.961 "params": { 00:37:28.961 "name": "Nvme$subsystem", 00:37:28.961 "trtype": "$TEST_TRANSPORT", 00:37:28.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.961 "adrfam": "ipv4", 00:37:28.961 "trsvcid": "$NVMF_PORT", 00:37:28.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.961 "hdgst": ${hdgst:-false}, 00:37:28.961 "ddgst": ${ddgst:-false} 00:37:28.961 }, 00:37:28.961 "method": "bdev_nvme_attach_controller" 00:37:28.961 } 00:37:28.961 EOF 00:37:28.961 )") 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2958566 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.961 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2958569 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.962 { 00:37:28.962 "params": { 00:37:28.962 "name": "Nvme$subsystem", 00:37:28.962 "trtype": "$TEST_TRANSPORT", 00:37:28.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.962 "adrfam": "ipv4", 00:37:28.962 "trsvcid": "$NVMF_PORT", 00:37:28.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.962 "hdgst": ${hdgst:-false}, 00:37:28.962 "ddgst": ${ddgst:-false} 00:37:28.962 }, 00:37:28.962 "method": "bdev_nvme_attach_controller" 00:37:28.962 } 00:37:28.962 EOF 00:37:28.962 )") 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 
4096 -w flush -t 1 -s 256 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.962 { 00:37:28.962 "params": { 00:37:28.962 "name": "Nvme$subsystem", 00:37:28.962 "trtype": "$TEST_TRANSPORT", 00:37:28.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.962 "adrfam": "ipv4", 00:37:28.962 "trsvcid": "$NVMF_PORT", 00:37:28.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.962 "hdgst": ${hdgst:-false}, 00:37:28.962 "ddgst": ${ddgst:-false} 00:37:28.962 }, 00:37:28.962 "method": "bdev_nvme_attach_controller" 00:37:28.962 } 00:37:28.962 EOF 00:37:28.962 )") 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.962 { 00:37:28.962 "params": { 00:37:28.962 "name": "Nvme$subsystem", 00:37:28.962 "trtype": "$TEST_TRANSPORT", 00:37:28.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.962 "adrfam": "ipv4", 00:37:28.962 "trsvcid": "$NVMF_PORT", 00:37:28.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.962 "hdgst": ${hdgst:-false}, 00:37:28.962 "ddgst": ${ddgst:-false} 00:37:28.962 }, 00:37:28.962 "method": "bdev_nvme_attach_controller" 00:37:28.962 } 00:37:28.962 EOF 00:37:28.962 )") 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
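[editor's note] The gen_nvmf_target_json calls traced above all follow the same pattern: build one JSON fragment per subsystem with a heredoc, join the fragments with IFS=",", and pipe the result through jq before it is handed to bdevperf. Below is a minimal sketch of that pattern using the fixed values visible in this run (tcp, 10.0.0.2, port 4420); the function name gen_target_json_sketch is hypothetical, not the actual helper in nvmf/common.sh:

gen_target_json_sketch() {
    # One fragment per subsystem; this run uses a single subsystem (1).
    local subsystem=1
    local config=()
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    # Join fragments on "," and validate with jq, as in the trace above.
    # With one fragment the join is a no-op; the real helper additionally
    # wraps multiple fragments into a full config array.
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .
}

The heredoc delimiter is unquoted, so $subsystem expands at runtime while the literal JSON braces survive; jq . serves as both validator and pretty-printer.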
00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2958562 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:28.962 "params": { 00:37:28.962 "name": "Nvme1", 00:37:28.962 "trtype": "tcp", 00:37:28.962 "traddr": "10.0.0.2", 00:37:28.962 "adrfam": "ipv4", 00:37:28.962 "trsvcid": "4420", 00:37:28.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:28.962 "hdgst": false, 00:37:28.962 "ddgst": false 00:37:28.962 }, 00:37:28.962 "method": "bdev_nvme_attach_controller" 00:37:28.962 }' 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:28.962 "params": { 00:37:28.962 "name": "Nvme1", 00:37:28.962 "trtype": "tcp", 00:37:28.962 "traddr": "10.0.0.2", 00:37:28.962 "adrfam": "ipv4", 00:37:28.962 "trsvcid": "4420", 00:37:28.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:28.962 "hdgst": false, 00:37:28.962 "ddgst": false 00:37:28.962 }, 00:37:28.962 "method": "bdev_nvme_attach_controller" 00:37:28.962 }' 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:28.962 "params": { 00:37:28.962 "name": "Nvme1", 00:37:28.962 "trtype": "tcp", 00:37:28.962 "traddr": "10.0.0.2", 00:37:28.962 "adrfam": "ipv4", 00:37:28.962 "trsvcid": "4420", 00:37:28.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:28.962 "hdgst": false, 00:37:28.962 "ddgst": false 00:37:28.962 }, 00:37:28.962 "method": "bdev_nvme_attach_controller" 00:37:28.962 }' 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:28.962 06:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:28.962 "params": { 00:37:28.962 "name": "Nvme1", 00:37:28.962 "trtype": "tcp", 00:37:28.962 "traddr": "10.0.0.2", 00:37:28.962 "adrfam": "ipv4", 00:37:28.962 "trsvcid": "4420", 00:37:28.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:28.962 "hdgst": false, 00:37:28.962 "ddgst": false 00:37:28.962 }, 00:37:28.962 "method": "bdev_nvme_attach_controller" 00:37:28.962 }' 00:37:28.962 [2024-11-20 06:48:48.650205] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:37:28.962 [2024-11-20 06:48:48.650273] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:37:28.962 [2024-11-20 06:48:48.670925] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:37:28.962 [2024-11-20 06:48:48.670988] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:37:28.962 [2024-11-20 06:48:48.676114] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:37:28.962 [2024-11-20 06:48:48.676198] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:37:28.962 [2024-11-20 06:48:48.679258] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:37:28.962 [2024-11-20 06:48:48.679325] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:37:28.962 [2024-11-20 06:48:48.834952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.962 [2024-11-20 06:48:48.875822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:29.223 [2024-11-20 06:48:48.884242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.223 [2024-11-20 06:48:48.926428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:29.223 [2024-11-20 06:48:48.976816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.223 [2024-11-20 06:48:49.019710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:29.223 [2024-11-20 06:48:49.038502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.223 [2024-11-20 06:48:49.077610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:29.223 Running I/O for 1 seconds... 00:37:29.483 Running I/O for 1 seconds... 00:37:29.483 Running I/O for 1 seconds... 00:37:29.483 Running I/O for 1 seconds... 
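[editor's note] At this point four bdevperf instances run concurrently, one workload each (write, read, flush, unmap), pinned to separate cores (-m 0x10/0x20/0x40/0x80) with distinct instance ids (-i 1..4), which is why the EAL lines above show --file-prefix=spdk1 through spdk4. A hedged sketch of the launch pattern, reusing the hypothetical gen_target_json_sketch from earlier; the real script tracks $WRITE_PID, $READ_PID, $FLUSH_PID and $UNMAP_PID individually rather than in an array:

pids=()
BDEVPERF=./build/examples/bdevperf      # path under the SPDK build tree
for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
    set -- $spec                        # core mask, instance id, workload
    "$BDEVPERF" -m "$1" -i "$2" --json <(gen_target_json_sketch) \
        -q 128 -o 4096 -w "$3" -t 1 -s 256 &
    pids+=($!)
done
wait "${pids[@]}"

Each instance reads its config over the process substitution, which is why /dev/fd/63 appears in the traced command lines.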
00:37:30.424 7836.00 IOPS, 30.61 MiB/s 00:37:30.424 Latency(us) 00:37:30.424 [2024-11-20T05:48:50.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.424 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:37:30.424 Nvme1n1 : 1.02 7864.12 30.72 0.00 0.00 16129.36 4915.20 27962.03 00:37:30.424 [2024-11-20T05:48:50.344Z] =================================================================================================================== 00:37:30.424 [2024-11-20T05:48:50.344Z] Total : 7864.12 30.72 0.00 0.00 16129.36 4915.20 27962.03 00:37:30.424 7304.00 IOPS, 28.53 MiB/s 00:37:30.424 Latency(us) 00:37:30.424 [2024-11-20T05:48:50.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.424 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:37:30.424 Nvme1n1 : 1.01 7407.72 28.94 0.00 0.00 17223.46 5079.04 26651.31 00:37:30.424 [2024-11-20T05:48:50.344Z] =================================================================================================================== 00:37:30.424 [2024-11-20T05:48:50.344Z] Total : 7407.72 28.94 0.00 0.00 17223.46 5079.04 26651.31 00:37:30.424 11483.00 IOPS, 44.86 MiB/s 00:37:30.424 Latency(us) 00:37:30.424 [2024-11-20T05:48:50.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.424 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:37:30.424 Nvme1n1 : 1.01 11540.83 45.08 0.00 0.00 11054.60 4887.89 16711.68 00:37:30.424 [2024-11-20T05:48:50.344Z] =================================================================================================================== 00:37:30.424 [2024-11-20T05:48:50.344Z] Total : 11540.83 45.08 0.00 0.00 11054.60 4887.89 16711.68 00:37:30.424 188056.00 IOPS, 734.59 MiB/s 00:37:30.424 Latency(us) 00:37:30.424 [2024-11-20T05:48:50.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.424 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:37:30.424 Nvme1n1 : 1.00 187681.69 733.13 0.00 0.00 678.44 300.37 1966.08 00:37:30.424 [2024-11-20T05:48:50.344Z] =================================================================================================================== 00:37:30.424 [2024-11-20T05:48:50.344Z] Total : 187681.69 733.13 0.00 0.00 678.44 300.37 1966.08 00:37:30.424 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2958564 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2958566 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2958569 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:30.685 rmmod nvme_tcp 00:37:30.685 rmmod nvme_fabrics 00:37:30.685 rmmod nvme_keyring 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2958371 ']' 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2958371 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2958371 ']' 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2958371 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2958371 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2958371' 00:37:30.685 killing process with pid 2958371 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2958371 00:37:30.685 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2958371 00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:30.946 06:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:33.487 06:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:33.487 00:37:33.487 real 0m13.054s 00:37:33.487 user 0m15.534s 00:37:33.487 sys 0m7.670s 00:37:33.487 06:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:33.487 06:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:33.487 ************************************ 00:37:33.487 END TEST nvmf_bdev_io_wait 00:37:33.487 ************************************ 00:37:33.487 06:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:33.487 06:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:33.487 06:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:33.487 06:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:33.487 ************************************ 00:37:33.487 START TEST nvmf_queue_depth 00:37:33.487 ************************************ 00:37:33.487 06:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:33.487 * Looking for test storage... 
00:37:33.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:33.487 06:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:33.487 06:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:37:33.487 06:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:33.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.487 --rc genhtml_branch_coverage=1 00:37:33.487 --rc genhtml_function_coverage=1 00:37:33.487 --rc genhtml_legend=1 00:37:33.487 --rc geninfo_all_blocks=1 00:37:33.487 --rc geninfo_unexecuted_blocks=1 00:37:33.487 00:37:33.487 ' 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:33.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.487 --rc genhtml_branch_coverage=1 00:37:33.487 --rc genhtml_function_coverage=1 00:37:33.487 --rc genhtml_legend=1 00:37:33.487 --rc geninfo_all_blocks=1 00:37:33.487 --rc geninfo_unexecuted_blocks=1 00:37:33.487 00:37:33.487 ' 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:33.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.487 --rc genhtml_branch_coverage=1 00:37:33.487 --rc genhtml_function_coverage=1 00:37:33.487 --rc genhtml_legend=1 00:37:33.487 --rc geninfo_all_blocks=1 00:37:33.487 --rc geninfo_unexecuted_blocks=1 00:37:33.487 00:37:33.487 ' 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:33.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.487 --rc genhtml_branch_coverage=1 00:37:33.487 --rc genhtml_function_coverage=1 00:37:33.487 --rc genhtml_legend=1 00:37:33.487 --rc geninfo_all_blocks=1 00:37:33.487 --rc 
geninfo_unexecuted_blocks=1 00:37:33.487 00:37:33.487 ' 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:33.487 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:37:33.488 06:48:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:41.716 06:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:41.716 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:41.716 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:37:41.716 Found net devices under 0000:31:00.0: cvl_0_0 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:41.716 Found net devices under 0000:31:00.1: cvl_0_1 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:41.716 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:41.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:41.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:37:41.717 00:37:41.717 --- 10.0.0.2 ping statistics --- 00:37:41.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:41.717 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:41.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:41.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:37:41.717 00:37:41.717 --- 10.0.0.1 ping statistics --- 00:37:41.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:41.717 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2963253 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2963253 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2963253 ']' 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:41.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
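[editor's note] Both pings succeeding confirms the topology nvmftestinit built from the two E810 ports found earlier (renamed cvl_0_0 and cvl_0_1 on this rig): the target port is moved into its own network namespace and the initiator reaches it over 10.0.0.x. The commands below restate the traced setup, minus the harness wrappers:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator interface, tagged for cleanup.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The SPDK_NVMF comment on the rule is what the iptables-save | grep -v SPDK_NVMF | iptables-restore sequence traced after the previous test filters on during teardown.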
00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:41.717 06:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:37:41.717 [2024-11-20 06:49:00.766117] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:41.717 [2024-11-20 06:49:00.767286] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:37:41.717 [2024-11-20 06:49:00.767338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:41.717 [2024-11-20 06:49:00.870520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.717 [2024-11-20 06:49:00.920538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:41.717 [2024-11-20 06:49:00.920590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:41.717 [2024-11-20 06:49:00.920599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:41.717 [2024-11-20 06:49:00.920606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:41.717 [2024-11-20 06:49:00.920612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:41.717 [2024-11-20 06:49:00.921404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:41.717 [2024-11-20 06:49:00.998734] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:41.717 [2024-11-20 06:49:00.999033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
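[editor's note] With the namespaces in place, nvmfappstart launches nvmf_tgt inside the target namespace with --interrupt-mode, so the reactor can sleep on file descriptors rather than busy-poll; the thread.c notices above confirm each spdk_thread switched to interrupt mode. A rough stand-in for the start-and-wait sequence, assuming the repo root as working directory; the polling loop is a simplification of the harness's waitforlisten, not its actual code:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# Wait until the app answers on its RPC socket before issuing any rpc_cmd.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done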
00:37:41.717 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:37:41.717 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0
00:37:41.717 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:37:41.717 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable
00:37:41.717 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:37:41.717 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:41.717 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:37:41.717 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.717 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:37:41.978 [2024-11-20 06:49:01.634251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:37:41.978 Malloc0
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:37:41.978 [2024-11-20 06:49:01.714419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2963301
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2963301 /var/tmp/bdevperf.sock
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2963301 ']'
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable
00:37:41.978 06:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:37:41.978 [2024-11-20 06:49:01.775123] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:37:41.978 [2024-11-20 06:49:01.775189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2963301 ]
00:37:41.978 [2024-11-20 06:49:01.870850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:42.239 [2024-11-20 06:49:01.924094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:42.810 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:37:42.810 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0
00:37:42.810 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:37:42.810 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:42.810 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:37:43.071 NVMe0n1
00:37:43.071 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:43.071 06:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:37:43.071 Running I/O for 10 seconds...
00:37:44.952 8238.00 IOPS, 32.18 MiB/s
[2024-11-20T05:49:06.256Z] 8685.50 IOPS, 33.93 MiB/s
[2024-11-20T05:49:07.207Z] 9026.00 IOPS, 35.26 MiB/s
[2024-11-20T05:49:08.146Z] 10191.50 IOPS, 39.81 MiB/s
[2024-11-20T05:49:09.087Z] 10856.20 IOPS, 42.41 MiB/s
[2024-11-20T05:49:10.031Z] 11274.67 IOPS, 44.04 MiB/s
[2024-11-20T05:49:10.972Z] 11624.43 IOPS, 45.41 MiB/s
[2024-11-20T05:49:11.912Z] 11883.75 IOPS, 46.42 MiB/s
[2024-11-20T05:49:13.294Z] 12068.78 IOPS, 47.14 MiB/s
[2024-11-20T05:49:13.294Z] 12263.50 IOPS, 47.90 MiB/s
00:37:53.374 Latency(us)
00:37:53.374 [2024-11-20T05:49:13.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:53.374 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:37:53.374 Verification LBA range: start 0x0 length 0x4000
00:37:53.374 NVMe0n1 : 10.05 12288.56 48.00 0.00 0.00 83026.85 18896.21 76458.67
00:37:53.374 [2024-11-20T05:49:13.294Z] ===================================================================================================================
00:37:53.374 [2024-11-20T05:49:13.294Z] Total : 12288.56 48.00 0.00 0.00 83026.85 18896.21 76458.67
00:37:53.374 {
00:37:53.374 "results": [
00:37:53.374 {
00:37:53.374 "job": "NVMe0n1",
00:37:53.374 "core_mask": "0x1",
00:37:53.374 "workload": "verify",
00:37:53.374 "status": "finished",
00:37:53.374 "verify_range": {
00:37:53.374 "start": 0,
00:37:53.374 "length": 16384
00:37:53.374 },
00:37:53.374 "queue_depth": 1024,
00:37:53.374 "io_size": 4096,
00:37:53.374 "runtime": 10.053744,
00:37:53.374 "iops": 12288.556382577475,
00:37:53.374 "mibps": 48.00217336944326,
00:37:53.374 "io_failed": 0,
00:37:53.374 "io_timeout": 0,
00:37:53.374 "avg_latency_us": 83026.84808098468,
00:37:53.374 "min_latency_us": 18896.213333333333,
00:37:53.374 "max_latency_us": 76458.66666666667
00:37:53.374 }
00:37:53.374 ],
00:37:53.374 "core_count": 1
00:37:53.374 }
00:37:53.374 06:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2963301
00:37:53.374 06:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2963301 ']'
00:37:53.374 06:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2963301
00:37:53.374 06:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname
00:37:53.374 06:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:37:53.374 06:49:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2963301
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2963301'
killing process with pid 2963301
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2963301
Received shutdown signal, test time was about 10.000000 seconds
00:37:53.375
00:37:53.375 Latency(us)
00:37:53.375 [2024-11-20T05:49:13.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:53.375 [2024-11-20T05:49:13.295Z] ===================================================================================================================
00:37:53.375 [2024-11-20T05:49:13.295Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2963301
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2963253 ']'
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2963253
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2963253 ']'
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2963253
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2963253
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2963253'
killing process with pid 2963253
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2963253
00:37:53.375 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2963253
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:53.634 06:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:55.544 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:55.544
00:37:55.544 real 0m22.582s
00:37:55.544 user 0m24.686s
00:37:55.544 sys 0m7.540s
00:37:55.544 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable
00:37:55.544 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:37:55.544 ************************************
00:37:55.544 END TEST nvmf_queue_depth
00:37:55.544 ************************************
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:37:55.805 ************************************
00:37:55.805 START TEST nvmf_target_multipath
00:37:55.805 ************************************
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:37:55.805 * Looking for test storage...
00:37:55.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:37:55.805 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:37:56.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:56.067 --rc genhtml_branch_coverage=1
00:37:56.067 --rc genhtml_function_coverage=1
00:37:56.067 --rc genhtml_legend=1
00:37:56.067 --rc geninfo_all_blocks=1
00:37:56.067 --rc geninfo_unexecuted_blocks=1
00:37:56.067
00:37:56.067 '
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:37:56.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:56.067 --rc genhtml_branch_coverage=1
00:37:56.067 --rc genhtml_function_coverage=1
00:37:56.067 --rc genhtml_legend=1
00:37:56.067 --rc geninfo_all_blocks=1
00:37:56.067 --rc geninfo_unexecuted_blocks=1
00:37:56.067
00:37:56.067 '
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:37:56.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:56.067 --rc genhtml_branch_coverage=1
00:37:56.067 --rc genhtml_function_coverage=1
00:37:56.067 --rc genhtml_legend=1
00:37:56.067 --rc geninfo_all_blocks=1
00:37:56.067 --rc geninfo_unexecuted_blocks=1
00:37:56.067
00:37:56.067 '
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:37:56.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:56.067 --rc genhtml_branch_coverage=1
00:37:56.067 --rc genhtml_function_coverage=1
00:37:56.067 --rc genhtml_legend=1
00:37:56.067 --rc geninfo_all_blocks=1
00:37:56.067 --rc geninfo_unexecuted_blocks=1
00:37:56.067
00:37:56.067 '
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:56.067 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable
00:37:56.068 06:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=()
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=()
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=()
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=()
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=()
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=()
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=()
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
Found 0000:31:00.0 (0x8086 - 0x159b)
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:38:04.227 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
Found 0000:31:00.1 (0x8086 - 0x159b)
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
Found net devices under 0000:31:00.0: cvl_0_0
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
Found net devices under 0000:31:00.1: cvl_0_1
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:38:04.228 06:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:38:04.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:38:04.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms
00:38:04.228
00:38:04.228 --- 10.0.0.2 ping statistics ---
00:38:04.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:04.228 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:38:04.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:38:04.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms
00:38:04.228
00:38:04.228 --- 10.0.0.1 ping statistics ---
00:38:04.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:04.228 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
only one NIC for nvmf test
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:04.228 06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
06:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:05.614
00:38:05.614 real 0m9.761s
00:38:05.614 user 0m2.171s
00:38:05.614 sys 0m5.527s
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:38:05.614 ************************************
00:38:05.614 END TEST nvmf_target_multipath
00:38:05.614 ************************************
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:38:05.614 ************************************
00:38:05.614 START TEST nvmf_zcopy
00:38:05.614 ************************************
00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:38:05.614 * Looking for test storage...
00:38:05.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:38:05.614 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:05.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.875 --rc genhtml_branch_coverage=1 00:38:05.875 --rc genhtml_function_coverage=1 00:38:05.875 --rc genhtml_legend=1 00:38:05.875 --rc geninfo_all_blocks=1 00:38:05.875 --rc geninfo_unexecuted_blocks=1 00:38:05.875 00:38:05.875 ' 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:05.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.875 --rc genhtml_branch_coverage=1 00:38:05.875 --rc genhtml_function_coverage=1 00:38:05.875 --rc genhtml_legend=1 00:38:05.875 --rc geninfo_all_blocks=1 00:38:05.875 --rc geninfo_unexecuted_blocks=1 00:38:05.875 00:38:05.875 ' 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:05.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.875 --rc genhtml_branch_coverage=1 00:38:05.875 --rc genhtml_function_coverage=1 00:38:05.875 --rc genhtml_legend=1 00:38:05.875 --rc geninfo_all_blocks=1 00:38:05.875 --rc geninfo_unexecuted_blocks=1 00:38:05.875 00:38:05.875 ' 00:38:05.875 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:05.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:05.876 --rc genhtml_branch_coverage=1 00:38:05.876 --rc genhtml_function_coverage=1 00:38:05.876 --rc genhtml_legend=1 00:38:05.876 --rc geninfo_all_blocks=1 00:38:05.876 --rc geninfo_unexecuted_blocks=1 00:38:05.876 00:38:05.876 ' 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
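The lt 1.15 2 check above is scripts/common.sh's cmp_versions at work: each version string is split on ., - and : with read -ra, missing components count as zero, and the pieces are compared numerically left to right, which is how lcov 1.15 is classified as pre-2.x and the extra branch/function coverage flags get enabled. A self-contained sketch of the same comparison:

# Return success when version $1 sorts strictly before $2.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # e.g. 1 < 2
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "old lcov: enable branch/function coverage flags"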
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
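nvmf/common.sh above mints the host identity once per run: nvme gen-hostnqn returns a UUID-based NQN (nqn.2014-08.org.nvmexpress:uuid:008c5ac1-... here), the UUID doubles as the host ID, and both are packed into the NVME_HOST argument array reused by every nvme connect. A sketch of that derivation; extracting the host ID with a suffix strip is an assumption, since that assignment itself is not expanded in the trace:

# gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>; the text
# after the last colon is the bare UUID used for --hostid.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")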
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:05.876 06:49:25 
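The PATH printed above carries repeated copies of the Go/protoc/golangci prefixes because paths/export.sh prepends them unconditionally each time it is sourced, and nested scripts source it once apiece. For contrast, a guarded prepend that would keep PATH idempotent (a sketch of the alternative, not what export.sh actually does):

prepend_path() {
    # Only prepend when the directory is not already on PATH.
    case ":$PATH:" in
        *":$1:"*) ;;
        *) PATH=$1:$PATH ;;
    esac
}
prepend_path /opt/go/1.21.1/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/golangci/1.54.2/bin
export PATH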
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:05.876 06:49:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:14.017 06:49:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:14.017 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:14.018 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:14.018 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:14.018 Found net devices under 0000:31:00.0: cvl_0_0 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:14.018 Found net devices under 0000:31:00.1: cvl_0_1 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:14.018 06:49:32 
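gather_supported_nvmf_pci_devs above registers the supported device IDs per NIC family (E810: 0x1592/0x159b, X722: 0x37d2, plus the Mellanox list under vendor 0x15b3) and resolves each matching PCI function to its kernel netdev via sysfs; this run matched the two E810 ports at 0000:31:00.0/.1 as cvl_0_0 and cvl_0_1. A condensed sketch of that match-and-resolve step (the real script works from a pre-built pci_bus_cache rather than rescanning sysfs):

# Intel device IDs accepted above; Mellanox handling elided.
e810=(0x1592 0x159b) x722=(0x37d2)
for pci in /sys/bus/pci/devices/*; do
    [[ $(< "$pci/vendor") == 0x8086 ]] || continue
    dev=$(< "$pci/device")
    [[ " ${e810[*]} ${x722[*]} " == *" $dev "* ]] || continue
    for netdev in "$pci"/net/*; do
        echo "Found net devices under ${pci##*/}: ${netdev##*/}"
    done
done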
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:14.018 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:14.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:14.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:38:14.019 00:38:14.019 --- 10.0.0.2 ping statistics --- 00:38:14.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:14.019 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:14.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:14.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:38:14.019 00:38:14.019 --- 10.0.0.1 ping statistics --- 00:38:14.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:14.019 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:14.019 06:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2973812 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2973812 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
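With the ping pair above succeeding in both directions (0.622 ms and 0.276 ms), the topology nvmf_tcp_init built is complete: cvl_0_0 lives inside the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420. The same setup, condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator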
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2973812 ']' 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:14.019 [2024-11-20 06:49:33.090956] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:14.019 [2024-11-20 06:49:33.092104] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:38:14.019 [2024-11-20 06:49:33.092153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:14.019 [2024-11-20 06:49:33.193367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.019 [2024-11-20 06:49:33.243946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:14.019 [2024-11-20 06:49:33.243994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:14.019 [2024-11-20 06:49:33.244003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:14.019 [2024-11-20 06:49:33.244010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:14.019 [2024-11-20 06:49:33.244017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:14.019 [2024-11-20 06:49:33.244794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:14.019 [2024-11-20 06:49:33.322598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:14.019 [2024-11-20 06:49:33.322918] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
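Every flag in that nvmf_tgt command line is accounted for earlier in the trace: -i 0 (shared-memory ID) and -e 0xFFFF (tracepoint mask) come from build_nvmf_app_args, --interrupt-mode from the suite's --interrupt-mode switch, and -m 0x2 from nvmfappstart's caller in zcopy.sh; the whole thing runs inside the target namespace. A sketch of the launch-and-wait sequence, with waitforlisten simplified to an rpc.py poll (an assumption; the helper's body is not expanded here):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# Stand-in for waitforlisten: block until the app answers on the
# default RPC socket /var/tmp/spdk.sock.
until "$SPDK/scripts/rpc.py" spdk_get_version &> /dev/null; do
    sleep 0.1
done

The interrupt-mode notices that follow the launch (reactor started on core 1, app_thread and nvmf_tgt_poll_group_000 set to intr mode) confirm the flag took effect.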
00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.019 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:14.280 [2024-11-20 06:49:33.937600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:14.280 [2024-11-20 06:49:33.965849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:14.280 06:49:33 
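zcopy.sh then provisions the target over the RPC socket: a TCP transport created with zero-copy enabled and in-capsule data disabled, an allow-any-host subsystem capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a RAM-backed bdev to export. Replayed as plain rpc.py calls (rpc_cmd is the suite's wrapper around scripts/rpc.py):

rpc="$SPDK/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # -c 0: no in-capsule data
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                       # any host, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB malloc bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1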
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.280 06:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:14.280 malloc0 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:14.280 { 00:38:14.280 "params": { 00:38:14.280 "name": "Nvme$subsystem", 00:38:14.280 "trtype": "$TEST_TRANSPORT", 00:38:14.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:14.280 "adrfam": "ipv4", 00:38:14.280 "trsvcid": "$NVMF_PORT", 00:38:14.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:14.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:14.280 "hdgst": ${hdgst:-false}, 00:38:14.280 "ddgst": ${ddgst:-false} 00:38:14.280 }, 00:38:14.280 "method": "bdev_nvme_attach_controller" 00:38:14.280 } 00:38:14.280 EOF 00:38:14.280 )") 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:14.280 06:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:14.280 "params": { 00:38:14.280 "name": "Nvme1", 00:38:14.280 "trtype": "tcp", 00:38:14.280 "traddr": "10.0.0.2", 00:38:14.280 "adrfam": "ipv4", 00:38:14.280 "trsvcid": "4420", 00:38:14.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:14.280 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:14.280 "hdgst": false, 00:38:14.280 "ddgst": false 00:38:14.280 }, 00:38:14.280 "method": "bdev_nvme_attach_controller" 00:38:14.280 }' 00:38:14.280 [2024-11-20 06:49:34.065160] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
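The bdevperf job above receives its attach configuration over a process-substitution descriptor (/dev/fd/62) generated by gen_nvmf_target_json. The same run can be reproduced from a regular file; the subsystems/bdev envelope below is an assumption reconstructed from common.sh (only the inner attach object is printed in the trace), and /tmp/nvme1.json is a stand-in path:

cat > /tmp/nvme1.json << 'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }]
  }]
}
JSON
"$SPDK/build/examples/bdevperf" --json /tmp/nvme1.json \
    -t 10 -q 128 -w verify -o 8192       # 10 s verify workload, QD 128, 8 KiB I/O

The ramp from 6590 to 8549 IOPS across the ten one-second samples, and the 14.9 ms average latency at queue depth 128 in the summary table, are this verify job warming up against the interrupt-mode target.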
00:38:14.280 [2024-11-20 06:49:34.065209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2974052 ] 00:38:14.280 [2024-11-20 06:49:34.154966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.280 [2024-11-20 06:49:34.191731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.540 Running I/O for 10 seconds... 00:38:16.865 6590.00 IOPS, 51.48 MiB/s [2024-11-20T05:49:37.725Z] 6582.50 IOPS, 51.43 MiB/s [2024-11-20T05:49:38.667Z] 6624.00 IOPS, 51.75 MiB/s [2024-11-20T05:49:39.608Z] 6861.50 IOPS, 53.61 MiB/s [2024-11-20T05:49:40.547Z] 7424.60 IOPS, 58.00 MiB/s [2024-11-20T05:49:41.490Z] 7801.33 IOPS, 60.95 MiB/s [2024-11-20T05:49:42.430Z] 8064.86 IOPS, 63.01 MiB/s [2024-11-20T05:49:43.813Z] 8266.75 IOPS, 64.58 MiB/s [2024-11-20T05:49:44.754Z] 8425.00 IOPS, 65.82 MiB/s [2024-11-20T05:49:44.755Z] 8549.10 IOPS, 66.79 MiB/s 00:38:24.835 Latency(us) 00:38:24.835 [2024-11-20T05:49:44.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.835 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:24.835 Verification LBA range: start 0x0 length 0x1000 00:38:24.835 Nvme1n1 : 10.01 8553.61 66.83 0.00 0.00 14918.04 1030.83 26214.40 00:38:24.835 [2024-11-20T05:49:44.755Z] =================================================================================================================== 00:38:24.835 [2024-11-20T05:49:44.755Z] Total : 8553.61 66.83 0.00 0.00 14918.04 1030.83 26214.40 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2976053 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:24.835 { 00:38:24.835 "params": { 00:38:24.835 "name": "Nvme$subsystem", 00:38:24.835 "trtype": "$TEST_TRANSPORT", 00:38:24.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.835 "adrfam": "ipv4", 00:38:24.835 "trsvcid": "$NVMF_PORT", 00:38:24.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.835 "hdgst": ${hdgst:-false}, 00:38:24.835 "ddgst": ${ddgst:-false} 00:38:24.835 }, 00:38:24.835 "method": "bdev_nvme_attach_controller" 00:38:24.835 } 00:38:24.835 EOF 00:38:24.835 )") 00:38:24.835 [2024-11-20 06:49:44.517193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:38:24.835 [2024-11-20 06:49:44.517220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:24.835 06:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:24.835 "params": { 00:38:24.835 "name": "Nvme1", 00:38:24.835 "trtype": "tcp", 00:38:24.835 "traddr": "10.0.0.2", 00:38:24.835 "adrfam": "ipv4", 00:38:24.835 "trsvcid": "4420", 00:38:24.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.835 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:24.835 "hdgst": false, 00:38:24.835 "ddgst": false 00:38:24.835 }, 00:38:24.835 "method": "bdev_nvme_attach_controller" 00:38:24.835 }' 00:38:24.835 [2024-11-20 06:49:44.529161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.529169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.541160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.541167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.553159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.553165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.565159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.565166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.570520] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:38:24.835 [2024-11-20 06:49:44.570567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2976053 ] 00:38:24.835 [2024-11-20 06:49:44.577159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.577166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.589159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.589166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.601159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.601166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.613159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.613165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.625159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.625166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.637159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.637166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.649159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.649165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.654820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.835 [2024-11-20 06:49:44.661161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.661168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.673159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.673168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.684389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.835 [2024-11-20 06:49:44.685160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.685167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.697166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.697175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.709166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.709183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.721160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:38:24.835 [2024-11-20 06:49:44.721172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.733161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.733170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:24.835 [2024-11-20 06:49:44.745160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:24.835 [2024-11-20 06:49:44.745167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.757169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.757186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.769162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.769171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.781161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.781171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.793160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.793167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.805159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.805165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.817158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.817165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.829160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.829169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.841160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.841168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.853159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.853165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.865159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.865166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.877160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.877168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.889159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.889166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 
06:49:44.901159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.901166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.913159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.913166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.925159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.925167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.937159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.937170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.949159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.949166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.961159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.961166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:44.973167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.973183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 Running I/O for 5 seconds... 00:38:25.096 [2024-11-20 06:49:44.988076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:44.988092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.096 [2024-11-20 06:49:45.001407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.096 [2024-11-20 06:49:45.001423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.016178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.016193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.029140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.029156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.041886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.041900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.056118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.056133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.069310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.069324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.083928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:25.357 [2024-11-20 06:49:45.083943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.097026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.097041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.109787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.109801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.124002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.124017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.137103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.137119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.149832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.149846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.162027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.162041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.176493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.176508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.189617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.189631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.204186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.204202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.217382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.217396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.232278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.232293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.245534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.245548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.357 [2024-11-20 06:49:45.260174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.357 [2024-11-20 06:49:45.260188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.273290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.273304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.288055] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.288070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.301337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.301351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.316251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.316265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.329224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.329238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.341925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.341939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.356414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.356430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.369480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.369494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.383739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.383758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.396815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.396830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.409514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.409528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.424051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.424066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.437054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.437068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.450236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.450251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.464263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.464277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.477561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.477576] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.492128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.492143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.505099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.505115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.517442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.517456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.618 [2024-11-20 06:49:45.531933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.618 [2024-11-20 06:49:45.531948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.544966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.544981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.557489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.557503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.571922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.571936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.585054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.585069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.597608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.597622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.612017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.612031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.624881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.624896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.637548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.637562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.651882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.651897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.664776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.664790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.677522] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.677536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.692047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.692062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.704819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.704834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.716963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.716977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.730094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.730108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.744137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.744152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.757316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.757331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.772246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.772262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:25.879 [2024-11-20 06:49:45.785362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:25.879 [2024-11-20 06:49:45.785376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.800059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.800074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.813164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.813179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.825798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.825812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.837594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.837608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.852075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.852089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.865127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.865141] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.877672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.877686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.892215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.892231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.905194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.905209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.917887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.917901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.932826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.932841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.945838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.945856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.960397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.960412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.973268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.973282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:45.985760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:45.985774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 19049.00 IOPS, 148.82 MiB/s [2024-11-20T05:49:46.060Z] [2024-11-20 06:49:46.000391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:46.000405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:46.013309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:46.013323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:46.027743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:46.027762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:46.040595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:46.040610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.140 [2024-11-20 06:49:46.053394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.140 [2024-11-20 06:49:46.053408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 
06:49:46.068051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.068067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.080896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.080910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.093676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.093690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.108091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.108105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.121270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.121284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.134025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.134039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.148008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.148023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.160742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.160761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.173474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.173488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.188054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.188069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.201200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.201219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.213637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.213651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.228203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.228218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.241165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.241180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.254148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.254162] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.268094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.268109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.280811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.280826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.293352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.293366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.401 [2024-11-20 06:49:46.307965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.401 [2024-11-20 06:49:46.307979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.320956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.320971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.333662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.333675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.348046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.348060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.361236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.361250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.374171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.374185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.388360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.388374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.401398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.401412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.416070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.416084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.428635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.428650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.441734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.441753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.455906] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.455924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.468831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.468845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.481784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.481798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.496222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.496237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.509540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.509554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.523895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.523909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.536576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.536590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.549204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.549218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.561171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.561185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.661 [2024-11-20 06:49:46.573589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.661 [2024-11-20 06:49:46.573603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.586507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.586521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.600545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.600560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.613563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.613577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.628146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.628160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.641195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.641209] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.653189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.653203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.666084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.666098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.680483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.680498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.693404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.693418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.707956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.707970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.721057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.721071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.733397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.733410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.748051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.748065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.760970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.760984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.773835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.773850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.788239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.788253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.801404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.801418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.816306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.816320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:26.921 [2024-11-20 06:49:46.829478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:26.921 [2024-11-20 06:49:46.829492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.181 [2024-11-20 06:49:46.844208] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.181 [2024-11-20 06:49:46.844223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.181 [2024-11-20 06:49:46.856952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.181 [2024-11-20 06:49:46.856967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.181 [2024-11-20 06:49:46.870273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.181 [2024-11-20 06:49:46.870287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.181 [2024-11-20 06:49:46.884565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.181 [2024-11-20 06:49:46.884580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.181 [2024-11-20 06:49:46.897709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.181 [2024-11-20 06:49:46.897724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.181 [2024-11-20 06:49:46.911925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.181 [2024-11-20 06:49:46.911939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.181 [2024-11-20 06:49:46.924627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.181 [2024-11-20 06:49:46.924642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.181 [2024-11-20 06:49:46.937546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.181 [2024-11-20 06:49:46.937560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.181 [2024-11-20 06:49:46.952625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.181 [2024-11-20 06:49:46.952640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.181 [2024-11-20 06:49:46.965550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.181 [2024-11-20 06:49:46.965564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.182 [2024-11-20 06:49:46.980066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.182 [2024-11-20 06:49:46.980081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.182 19103.50 IOPS, 149.25 MiB/s [2024-11-20T05:49:47.102Z] [2024-11-20 06:49:46.993346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.182 [2024-11-20 06:49:46.993360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.182 [2024-11-20 06:49:47.008227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.182 [2024-11-20 06:49:47.008241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.182 [2024-11-20 06:49:47.021184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.182 [2024-11-20 06:49:47.021198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.182 [2024-11-20 06:49:47.033919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:27.182 [2024-11-20 06:49:47.033933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.182 [2024-11-20 06:49:47.048435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.182 [2024-11-20 06:49:47.048448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.182 [2024-11-20 06:49:47.061655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.182 [2024-11-20 06:49:47.061670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.182 [2024-11-20 06:49:47.076163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.182 [2024-11-20 06:49:47.076178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.182 [2024-11-20 06:49:47.088788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.182 [2024-11-20 06:49:47.088803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.101315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.101330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.115910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.115924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.128978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.128993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.141889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.141903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.155947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.155961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.169126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.169141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.180993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.181008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.193841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.193854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.208171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.208189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.221207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.221222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.233166] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.233181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.245450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.245464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.260206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.260220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.273072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.273088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.285809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.285823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.299965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.299980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.313113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.313128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.325800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.325814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.339942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.339956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.442 [2024-11-20 06:49:47.353169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.442 [2024-11-20 06:49:47.353184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.366135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.366150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.380822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.380836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.394138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.394152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.408037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.408051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.421115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.421130] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.433897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.433911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.448375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.448390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.461387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.461406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.474371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.474385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.488835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.488850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.501751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.501765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.516431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.516447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.529488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.529502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.544279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.544294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.557490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.703 [2024-11-20 06:49:47.557504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.703 [2024-11-20 06:49:47.572252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.704 [2024-11-20 06:49:47.572268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.704 [2024-11-20 06:49:47.585498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.704 [2024-11-20 06:49:47.585512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.704 [2024-11-20 06:49:47.600492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.704 [2024-11-20 06:49:47.600507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.704 [2024-11-20 06:49:47.613309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.704 [2024-11-20 06:49:47.613325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.626091] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.626106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.640030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.640045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.653105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.653121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.665893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.665907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.680715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.680730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.693584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.693598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.708803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.708817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.721918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.721936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.736580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.736594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.749601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.749616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.763954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.763968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.776702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.776717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.789855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.789869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.804831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.804846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.818061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.818075] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.832318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.832333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.965 [2024-11-20 06:49:47.845164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.965 [2024-11-20 06:49:47.845179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.966 [2024-11-20 06:49:47.858264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.966 [2024-11-20 06:49:47.858278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:27.966 [2024-11-20 06:49:47.872328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:27.966 [2024-11-20 06:49:47.872343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:47.884985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:47.885000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:47.898201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:47.898215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:47.912821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:47.912836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:47.925629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:47.925643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:47.940153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:47.940168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:47.953239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:47.953253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:47.966238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:47.966252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:47.980172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:47.980190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 19105.67 IOPS, 149.26 MiB/s [2024-11-20T05:49:48.147Z] [2024-11-20 06:49:47.993005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:47.993020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:48.005930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:48.005944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 
06:49:48.020706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:48.020721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:48.033613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:48.033626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:48.048103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:48.048117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:48.061082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:48.061097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:48.073831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:48.073845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:48.088136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:48.088151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:48.101209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:48.101223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:48.113838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:48.113852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:48.127938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:48.127952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.227 [2024-11-20 06:49:48.140792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.227 [2024-11-20 06:49:48.140806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.488 [2024-11-20 06:49:48.154267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.488 [2024-11-20 06:49:48.154281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.488 [2024-11-20 06:49:48.168230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.488 [2024-11-20 06:49:48.168245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.488 [2024-11-20 06:49:48.181149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.488 [2024-11-20 06:49:48.181163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.488 [2024-11-20 06:49:48.193913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.488 [2024-11-20 06:49:48.193926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:28.488 [2024-11-20 06:49:48.208776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:28.488 [2024-11-20 06:49:48.208790] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:28.488 [2024-11-20 06:49:48.221814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:28.488 [2024-11-20 06:49:48.221828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats every ~13 ms, 06:49:48.236025 through 06:49:48.985625, while the background I/O job runs; repetitions condensed, periodic progress samples kept ...]
00:38:29.273 19097.75 IOPS, 149.20 MiB/s [2024-11-20T05:49:49.193Z]
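The error storm above is the point of the test rather than a failure: while the I/O job runs, the script keeps re-adding namespace ID 1 to a subsystem that already has it, and the target rejects every attempt. A minimal sketch that reproduces the same rejection by hand, assuming a running spdk_tgt and the stock scripts/rpc.py (the subsystem NQN matches the log; the malloc bdev name is a placeholder):

    # set up a TCP subsystem with one namespace (names are illustrative)
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # adding the same NSID again is rejected, producing exactly the pair of
    # messages logged above: "Requested NSID 1 already in use" /
    # "Unable to add namespace"
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1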
[... the NSID-collision pair continues at the same cadence, 06:49:49.000346 through 06:49:49.984180; repetitions condensed ...]
00:38:30.317 19083.20 IOPS, 149.09 MiB/s [2024-11-20T05:49:50.237Z]
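As a quick consistency check, the progress samples agree with the job parameters shown in the result table below: at an 8192-byte I/O size, MiB/s is IOPS x 8192 B / 2^20, so 19083.20 IOPS works out to 156,329,574 B/s = 149.09 MiB/s, and the earlier 19097.75 IOPS sample to 149.20 MiB/s, matching both samples above.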
00:38:30.317 [2024-11-20 06:49:49.996874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:30.317 [2024-11-20 06:49:49.996889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:30.317
00:38:30.317 Latency(us)
00:38:30.317 [2024-11-20T05:49:50.237Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:38:30.317 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:38:30.317 Nvme1n1                     :       5.01   19085.58     149.11       0.00     0.00    6701.01    2812.59   11195.73
00:38:30.317 [2024-11-20T05:49:50.237Z] ===================================================================================================================
00:38:30.317 [2024-11-20T05:49:50.237Z] Total                       :              19085.58     149.11       0.00     0.00    6701.01    2812.59   11195.73
[... the NSID-collision pair fires nine more times, 06:49:50.005167 through 06:49:50.101169, before the add-namespace loop exits ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2976053) - No such process
06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2976053
06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:30.317 06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.317 06:49:50
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:30.318 06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.318 06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:30.318 06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.318 06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:30.318 delay0 00:38:30.318 06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.318 06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:38:30.318 06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.318 06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:30.318 06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.318 06:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:38:30.578 [2024-11-20 06:49:50.309865] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:37.157 Initializing NVMe Controllers 00:38:37.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:37.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:37.157 Initialization complete. Launching workers. 
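For context on the delay0 bdev created just above: bdev_delay_create's -r/-t/-w/-n arguments are average and p99 read/write latencies in microseconds (our reading of the SPDK rpc help, not stated in this log), so 1000000 everywhere configures roughly one-second latencies; a deliberately slow namespace gives the abort example's queued I/O time to be cancelled. A sketch of the equivalent standalone rpc.py calls, under that assumption:

    # ~1 s average and p99 latency for both reads and writes (values in usec)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the slow bdev as NSID 1; the abort tool's I/O then queues against it
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1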
00:38:37.157 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4484 00:38:37.157 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4766, failed to submit 38 00:38:37.157 success 4632, unsuccessful 134, failed 0 00:38:37.157 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:38:37.157 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:38:37.157 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:37.157 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:38:37.157 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:37.157 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:38:37.158 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:37.158 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:37.158 rmmod nvme_tcp 00:38:37.418 rmmod nvme_fabrics 00:38:37.418 rmmod nvme_keyring 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2973812 ']' 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2973812 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2973812 ']' 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2973812 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2973812 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2973812' 00:38:37.418 killing process with pid 2973812 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2973812 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2973812 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:37.418 06:49:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:38:37.418 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:37.419 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:37.419 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:37.419 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:37.419 06:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:39.962 00:38:39.962 real 0m34.003s 00:38:39.962 user 0m43.521s 00:38:39.962 sys 0m12.230s 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:39.962 ************************************ 00:38:39.962 END TEST nvmf_zcopy 00:38:39.962 ************************************ 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:39.962 ************************************ 00:38:39.962 START TEST nvmf_nmic 00:38:39.962 ************************************ 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:38:39.962 * Looking for test storage... 
00:38:39.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:39.962 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.963 --rc genhtml_branch_coverage=1 00:38:39.963 --rc genhtml_function_coverage=1 00:38:39.963 --rc genhtml_legend=1 00:38:39.963 --rc geninfo_all_blocks=1 00:38:39.963 --rc geninfo_unexecuted_blocks=1 00:38:39.963 00:38:39.963 ' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.963 --rc genhtml_branch_coverage=1 00:38:39.963 --rc genhtml_function_coverage=1 00:38:39.963 --rc genhtml_legend=1 00:38:39.963 --rc geninfo_all_blocks=1 00:38:39.963 --rc geninfo_unexecuted_blocks=1 00:38:39.963 00:38:39.963 ' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.963 --rc genhtml_branch_coverage=1 00:38:39.963 --rc genhtml_function_coverage=1 00:38:39.963 --rc genhtml_legend=1 00:38:39.963 --rc geninfo_all_blocks=1 00:38:39.963 --rc geninfo_unexecuted_blocks=1 00:38:39.963 00:38:39.963 ' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:39.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.963 --rc genhtml_branch_coverage=1 00:38:39.963 --rc genhtml_function_coverage=1 00:38:39.963 --rc genhtml_legend=1 00:38:39.963 --rc geninfo_all_blocks=1 00:38:39.963 --rc geninfo_unexecuted_blocks=1 00:38:39.963 00:38:39.963 ' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:39.963 06:49:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:38:39.963 06:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:48.102 06:50:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:48.102 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:48.102 06:50:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:48.102 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:48.102 Found net devices under 0000:31:00.0: cvl_0_0 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.102 
06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:48.102 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:48.103 Found net devices under 0000:31:00.1: cvl_0_1 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:48.103 06:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
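What nvmf/common.sh's nvmf_tcp_init is doing above: with two ports of the same physical e810 NIC (cvl_0_0, cvl_0_1) on one host, it moves one port into a dedicated network namespace so the SPDK target and the kernel NVMe initiator talk over a real link but through fully separate network stacks. Condensed into a standalone sketch (interface names and addresses exactly as traced in this run; the link-up, firewall, and ping-verification steps follow in the trace below):

  # start from clean interfaces
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # target port gets its own namespace; initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side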
00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:48.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:48.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:38:48.103 00:38:48.103 --- 10.0.0.2 ping statistics --- 00:38:48.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.103 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:48.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:48.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:38:48.103 00:38:48.103 --- 10.0.0.1 ping statistics --- 00:38:48.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.103 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2985454 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 2985454 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2985454 ']' 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:48.103 06:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.103 [2024-11-20 06:50:07.324031] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:48.103 [2024-11-20 06:50:07.325187] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:38:48.103 [2024-11-20 06:50:07.325240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:48.103 [2024-11-20 06:50:07.426136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:48.103 [2024-11-20 06:50:07.481582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:48.103 [2024-11-20 06:50:07.481635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:48.103 [2024-11-20 06:50:07.481643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:48.103 [2024-11-20 06:50:07.481650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:48.103 [2024-11-20 06:50:07.481657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:48.103 [2024-11-20 06:50:07.484148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:48.103 [2024-11-20 06:50:07.484309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:48.103 [2024-11-20 06:50:07.484469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.103 [2024-11-20 06:50:07.484469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:48.103 [2024-11-20 06:50:07.564311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:48.103 [2024-11-20 06:50:07.564964] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:48.103 [2024-11-20 06:50:07.565454] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
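nvmfappstart then launches the target inside that namespace with --interrupt-mode and core mask 0xF; the reactor.c and thread.c notices above confirm that four reactors come up and that app_thread plus all four nvmf_tgt_poll_group threads run in interrupt (event-driven) mode rather than busy-polling, which is the behavior this *_interrupt_mode test family exercises. Reproduced by hand, the launch plus a minimal stand-in for waitforlisten would look roughly like this (the polling loop is an approximation of what autotest_common.sh does, not a copy of it):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # The JSON-RPC socket is a path-based UNIX socket, so rpc.py works from the
  # root namespace without "ip netns exec"; block until the target answers.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done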
00:38:48.103 [2024-11-20 06:50:07.565917] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:48.103 [2024-11-20 06:50:07.566017] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.364 [2024-11-20 06:50:08.189337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.364 Malloc0 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
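Everything the nmic test provisions goes through JSON-RPC; rpc_cmd is a thin wrapper around scripts/rpc.py pointed at the target's /var/tmp/spdk.sock. The sequence traced above, written out as plain rpc.py calls (arguments exactly as logged):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB I/O unit size
  $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Here -a makes the subsystem accept any host NQN, and -s sets the serial number that the initiator-side waitforserial checks later grep for in lsblk output.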
00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.364 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.625 [2024-11-20 06:50:08.281682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:38:48.625 test case1: single bdev can't be used in multiple subsystems 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.625 [2024-11-20 06:50:08.308964] bdev.c:8311:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:38:48.625 [2024-11-20 06:50:08.308990] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:38:48.625 [2024-11-20 06:50:08.308999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.625 request: 00:38:48.625 { 00:38:48.625 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:38:48.625 "namespace": { 00:38:48.625 "bdev_name": "Malloc0", 00:38:48.625 "no_auto_visible": false 00:38:48.625 }, 00:38:48.625 "method": "nvmf_subsystem_add_ns", 00:38:48.625 "req_id": 1 00:38:48.625 } 00:38:48.625 Got JSON-RPC error response 00:38:48.625 response: 00:38:48.625 { 00:38:48.625 "code": -32602, 00:38:48.625 "message": "Invalid parameters" 00:38:48.625 } 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:38:48.625 06:50:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:38:48.625 Adding namespace failed - expected result. 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:38:48.625 test case2: host connect to nvmf target in multiple paths 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:48.625 [2024-11-20 06:50:08.321109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.625 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:48.886 06:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:38:49.456 06:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:38:49.456 06:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:38:49.456 06:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:38:49.456 06:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:38:49.456 06:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:38:51.368 06:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:38:51.368 06:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:38:51.368 06:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:38:51.368 06:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:38:51.368 06:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:38:51.368 06:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:38:51.368 06:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:38:51.646 [global] 00:38:51.646 thread=1 00:38:51.646 invalidate=1 
00:38:51.646 rw=write 00:38:51.646 time_based=1 00:38:51.646 runtime=1 00:38:51.646 ioengine=libaio 00:38:51.646 direct=1 00:38:51.646 bs=4096 00:38:51.646 iodepth=1 00:38:51.646 norandommap=0 00:38:51.646 numjobs=1 00:38:51.646 00:38:51.646 verify_dump=1 00:38:51.646 verify_backlog=512 00:38:51.646 verify_state_save=0 00:38:51.646 do_verify=1 00:38:51.646 verify=crc32c-intel 00:38:51.646 [job0] 00:38:51.646 filename=/dev/nvme0n1 00:38:51.646 Could not set queue depth (nvme0n1) 00:38:51.904 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:38:51.904 fio-3.35 00:38:51.904 Starting 1 thread 00:38:53.286 00:38:53.286 job0: (groupid=0, jobs=1): err= 0: pid=2987055: Wed Nov 20 06:50:12 2024 00:38:53.286 read: IOPS=18, BW=73.3KiB/s (75.0kB/s)(76.0KiB/1037msec) 00:38:53.286 slat (nsec): min=8750, max=26982, avg=25384.32, stdev=4035.73 00:38:53.286 clat (usec): min=774, max=41040, avg=38837.45, stdev=9217.52 00:38:53.286 lat (usec): min=801, max=41066, avg=38862.83, stdev=9217.37 00:38:53.286 clat percentiles (usec): 00:38:53.286 | 1.00th=[ 775], 5.00th=[ 775], 10.00th=[40633], 20.00th=[41157], 00:38:53.287 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:38:53.287 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:38:53.287 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:38:53.287 | 99.99th=[41157] 00:38:53.287 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:38:53.287 slat (usec): min=9, max=30837, avg=91.71, stdev=1361.48 00:38:53.287 clat (usec): min=165, max=691, avg=484.05, stdev=97.00 00:38:53.287 lat (usec): min=178, max=31347, avg=575.76, stdev=1366.36 00:38:53.287 clat percentiles (usec): 00:38:53.287 | 1.00th=[ 243], 5.00th=[ 330], 10.00th=[ 367], 20.00th=[ 400], 00:38:53.287 | 30.00th=[ 449], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 486], 00:38:53.287 | 70.00th=[ 553], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 627], 00:38:53.287 | 99.00th=[ 676], 99.50th=[ 676], 99.90th=[ 693], 99.95th=[ 693], 00:38:53.287 | 99.99th=[ 693] 00:38:53.287 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:38:53.287 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:38:53.287 lat (usec) : 250=1.32%, 500=60.08%, 750=35.03%, 1000=0.19% 00:38:53.287 lat (msec) : 50=3.39% 00:38:53.287 cpu : usr=0.97%, sys=1.25%, ctx=534, majf=0, minf=1 00:38:53.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:53.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:53.287 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:53.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:53.287 00:38:53.287 Run status group 0 (all jobs): 00:38:53.287 READ: bw=73.3KiB/s (75.0kB/s), 73.3KiB/s-73.3KiB/s (75.0kB/s-75.0kB/s), io=76.0KiB (77.8kB), run=1037-1037msec 00:38:53.287 WRITE: bw=1975KiB/s (2022kB/s), 1975KiB/s-1975KiB/s (2022kB/s-2022kB/s), io=2048KiB (2097kB), run=1037-1037msec 00:38:53.287 00:38:53.287 Disk stats (read/write): 00:38:53.287 nvme0n1: ios=40/512, merge=0/0, ticks=1539/224, in_queue=1763, util=98.80% 00:38:53.287 06:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:53.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:38:53.287 06:50:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:53.287 06:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:38:53.287 06:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:38:53.287 06:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:53.287 rmmod nvme_tcp 00:38:53.287 rmmod nvme_fabrics 00:38:53.287 rmmod nvme_keyring 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2985454 ']' 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2985454 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2985454 ']' 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2985454 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2985454 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 2985454' 00:38:53.287 killing process with pid 2985454 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2985454 00:38:53.287 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2985454 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:53.547 06:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.457 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:55.457 00:38:55.457 real 0m15.904s 00:38:55.457 user 0m32.695s 00:38:55.457 sys 0m7.454s 00:38:55.457 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:55.457 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:38:55.457 ************************************ 00:38:55.457 END TEST nvmf_nmic 00:38:55.457 ************************************ 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:55.718 ************************************ 00:38:55.718 START TEST nvmf_fio_target 00:38:55.718 ************************************ 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:38:55.718 * Looking for test storage... 
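Recapping the nmic test that just finished above: case 1 deliberately tried to add the already-claimed Malloc0 to a second subsystem and treated the JSON-RPC "Invalid parameters" error as the expected result; case 2 added a second listener on port 4421 and connected the kernel initiator to the same subsystem over both ports, which is why the final disconnect reported two controllers. The verification I/O came from the fio wrapper, whose generated job file is echoed in the trace; flattened to a single fio invocation it is roughly the following (the --hostnqn/--hostid flags from the trace are elided here for brevity):

  # two paths to the same subsystem, listeners on 4420 and 4421
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  # 4 KiB writes at queue depth 1 for 1 s, then read back with crc32c verification
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread \
      --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1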
00:38:55.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:55.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.718 --rc genhtml_branch_coverage=1 00:38:55.718 --rc genhtml_function_coverage=1 00:38:55.718 --rc genhtml_legend=1 00:38:55.718 --rc geninfo_all_blocks=1 00:38:55.718 --rc geninfo_unexecuted_blocks=1 00:38:55.718 00:38:55.718 ' 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:55.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.718 --rc genhtml_branch_coverage=1 00:38:55.718 --rc genhtml_function_coverage=1 00:38:55.718 --rc genhtml_legend=1 00:38:55.718 --rc geninfo_all_blocks=1 00:38:55.718 --rc geninfo_unexecuted_blocks=1 00:38:55.718 00:38:55.718 ' 00:38:55.718 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:55.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.718 --rc genhtml_branch_coverage=1 00:38:55.718 --rc genhtml_function_coverage=1 00:38:55.719 --rc genhtml_legend=1 00:38:55.719 --rc geninfo_all_blocks=1 00:38:55.719 --rc geninfo_unexecuted_blocks=1 00:38:55.719 00:38:55.719 ' 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:55.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:55.719 --rc genhtml_branch_coverage=1 00:38:55.719 --rc genhtml_function_coverage=1 00:38:55.719 --rc genhtml_legend=1 00:38:55.719 --rc geninfo_all_blocks=1 00:38:55.719 --rc geninfo_unexecuted_blocks=1 00:38:55.719 
00:38:55.719 ' 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:55.719 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:38:55.980 06:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:04.115 06:50:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:04.115 06:50:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:04.115 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:04.115 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:04.115 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:04.116 Found net 
devices under 0000:31:00.0: cvl_0_0 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:04.116 Found net devices under 0000:31:00.1: cvl_0_1 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:04.116 06:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:04.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:04.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:39:04.116 00:39:04.116 --- 10.0.0.2 ping statistics --- 00:39:04.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:04.116 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:04.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:04.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:39:04.116 00:39:04.116 --- 10.0.0.1 ping statistics --- 00:39:04.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:04.116 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2991396 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2991396 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2991396 ']' 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:04.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:04.116 06:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:04.116 [2024-11-20 06:50:23.311220] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:04.116 [2024-11-20 06:50:23.312379] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:39:04.116 [2024-11-20 06:50:23.312431] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:04.116 [2024-11-20 06:50:23.413190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:04.116 [2024-11-20 06:50:23.466149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:04.116 [2024-11-20 06:50:23.466202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:04.116 [2024-11-20 06:50:23.466211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:04.116 [2024-11-20 06:50:23.466223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:04.116 [2024-11-20 06:50:23.466229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:04.116 [2024-11-20 06:50:23.468310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:04.116 [2024-11-20 06:50:23.468471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:04.116 [2024-11-20 06:50:23.468632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:04.116 [2024-11-20 06:50:23.468633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:04.116 [2024-11-20 06:50:23.547495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:04.116 [2024-11-20 06:50:23.548829] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:04.116 [2024-11-20 06:50:23.549023] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:04.117 [2024-11-20 06:50:23.549312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:04.117 [2024-11-20 06:50:23.549379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:04.378 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:04.378 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:39:04.378 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:04.378 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:04.378 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:04.378 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:04.378 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:04.638 [2024-11-20 06:50:24.333521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:04.638 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:04.899 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:04.899 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:04.899 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:04.899 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:05.159 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:05.159 06:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:05.421 06:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:05.421 06:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:05.682 06:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:05.682 06:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:05.682 06:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:05.943 06:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:05.943 06:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:06.203 06:50:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:06.203 06:50:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:06.464 06:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:06.464 06:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:06.464 06:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:06.725 06:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:06.725 06:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:06.986 06:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:06.986 [2024-11-20 06:50:26.845446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:06.986 06:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:07.247 06:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:07.509 06:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:08.082 06:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:08.082 06:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:39:08.082 06:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:39:08.082 06:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:39:08.082 06:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:39:08.082 06:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:39:10.016 06:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:39:10.016 06:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:39:10.016 06:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:39:10.016 06:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:39:10.016 06:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:39:10.016 06:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:39:10.016 06:50:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:10.016 [global] 00:39:10.016 thread=1 00:39:10.016 invalidate=1 00:39:10.016 rw=write 00:39:10.016 time_based=1 00:39:10.016 runtime=1 00:39:10.016 ioengine=libaio 00:39:10.016 direct=1 00:39:10.016 bs=4096 00:39:10.016 iodepth=1 00:39:10.016 norandommap=0 00:39:10.016 numjobs=1 00:39:10.016 00:39:10.016 verify_dump=1 00:39:10.016 verify_backlog=512 00:39:10.016 verify_state_save=0 00:39:10.016 do_verify=1 00:39:10.016 verify=crc32c-intel 00:39:10.016 [job0] 00:39:10.016 filename=/dev/nvme0n1 00:39:10.016 [job1] 00:39:10.016 filename=/dev/nvme0n2 00:39:10.016 [job2] 00:39:10.016 filename=/dev/nvme0n3 00:39:10.016 [job3] 00:39:10.016 filename=/dev/nvme0n4 00:39:10.016 Could not set queue depth (nvme0n1) 00:39:10.016 Could not set queue depth (nvme0n2) 00:39:10.016 Could not set queue depth (nvme0n3) 00:39:10.016 Could not set queue depth (nvme0n4) 00:39:10.585 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:10.585 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:10.585 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:10.585 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:10.585 fio-3.35 00:39:10.585 Starting 4 threads 00:39:11.528 00:39:11.528 job0: (groupid=0, jobs=1): err= 0: pid=2992953: Wed Nov 20 06:50:31 2024 00:39:11.528 read: IOPS=17, BW=71.2KiB/s (72.9kB/s)(72.0KiB/1011msec) 00:39:11.528 slat (nsec): min=25919, max=26753, avg=26334.06, stdev=244.36 00:39:11.528 clat (usec): min=1165, max=42101, avg=39656.69, stdev=9607.80 00:39:11.528 lat (usec): min=1191, max=42127, avg=39683.02, stdev=9607.90 00:39:11.528 clat percentiles (usec): 00:39:11.528 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41157], 20.00th=[41681], 00:39:11.528 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:39:11.528 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:11.528 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:11.528 | 99.99th=[42206] 00:39:11.528 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:39:11.528 slat (nsec): min=10128, max=56908, avg=32699.70, stdev=8248.75 00:39:11.528 clat (usec): min=150, max=871, avg=538.66, stdev=135.18 00:39:11.528 lat (usec): min=164, max=908, avg=571.36, stdev=136.56 00:39:11.528 clat percentiles (usec): 00:39:11.528 | 1.00th=[ 249], 5.00th=[ 326], 10.00th=[ 371], 20.00th=[ 412], 00:39:11.528 | 30.00th=[ 457], 40.00th=[ 502], 50.00th=[ 537], 60.00th=[ 578], 00:39:11.528 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 709], 95.00th=[ 766], 
00:39:11.528 | 99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 873], 99.95th=[ 873], 00:39:11.528 | 99.99th=[ 873] 00:39:11.528 bw ( KiB/s): min= 4096, max= 4096, per=47.75%, avg=4096.00, stdev= 0.00, samples=1 00:39:11.528 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:11.528 lat (usec) : 250=1.13%, 500=36.79%, 750=52.64%, 1000=6.04% 00:39:11.528 lat (msec) : 2=0.19%, 50=3.21% 00:39:11.528 cpu : usr=0.89%, sys=1.58%, ctx=533, majf=0, minf=1 00:39:11.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.528 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:11.528 job1: (groupid=0, jobs=1): err= 0: pid=2992954: Wed Nov 20 06:50:31 2024 00:39:11.528 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:11.528 slat (nsec): min=25806, max=61529, avg=27365.86, stdev=4029.44 00:39:11.528 clat (usec): min=795, max=1288, avg=1037.34, stdev=74.74 00:39:11.528 lat (usec): min=822, max=1315, avg=1064.70, stdev=74.53 00:39:11.528 clat percentiles (usec): 00:39:11.528 | 1.00th=[ 840], 5.00th=[ 906], 10.00th=[ 938], 20.00th=[ 988], 00:39:11.528 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1057], 00:39:11.528 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:39:11.528 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1287], 99.95th=[ 1287], 00:39:11.528 | 99.99th=[ 1287] 00:39:11.528 write: IOPS=676, BW=2705KiB/s (2770kB/s)(2708KiB/1001msec); 0 zone resets 00:39:11.528 slat (nsec): min=3790, max=68677, avg=30749.92, stdev=10050.53 00:39:11.528 clat (usec): min=279, max=982, avg=627.93, stdev=120.35 00:39:11.528 lat (usec): min=289, max=1016, avg=658.68, stdev=124.69 00:39:11.528 clat percentiles (usec): 00:39:11.528 | 1.00th=[ 351], 5.00th=[ 400], 10.00th=[ 465], 20.00th=[ 515], 00:39:11.528 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 676], 00:39:11.528 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 799], 00:39:11.528 | 99.00th=[ 865], 99.50th=[ 906], 99.90th=[ 979], 99.95th=[ 979], 00:39:11.528 | 99.99th=[ 979] 00:39:11.528 bw ( KiB/s): min= 4096, max= 4096, per=47.75%, avg=4096.00, stdev= 0.00, samples=1 00:39:11.528 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:11.528 lat (usec) : 500=9.42%, 750=38.44%, 1000=20.27% 00:39:11.528 lat (msec) : 2=31.88% 00:39:11.528 cpu : usr=1.60%, sys=3.80%, ctx=1190, majf=0, minf=1 00:39:11.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.528 issued rwts: total=512,677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:11.528 job2: (groupid=0, jobs=1): err= 0: pid=2992955: Wed Nov 20 06:50:31 2024 00:39:11.528 read: IOPS=19, BW=77.5KiB/s (79.4kB/s)(80.0KiB/1032msec) 00:39:11.528 slat (nsec): min=25359, max=26135, avg=25726.60, stdev=192.92 00:39:11.528 clat (usec): min=40918, max=41521, avg=40993.24, stdev=127.68 00:39:11.528 lat (usec): min=40943, max=41547, avg=41018.96, stdev=127.72 00:39:11.528 clat percentiles (usec): 00:39:11.528 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:39:11.528 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:11.528 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:11.528 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:39:11.528 | 99.99th=[41681] 00:39:11.528 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:39:11.528 slat (nsec): min=9838, max=79980, avg=28122.44, stdev=10740.24 00:39:11.528 clat (usec): min=126, max=811, avg=378.23, stdev=104.31 00:39:11.528 lat (usec): min=140, max=846, avg=406.35, stdev=105.46 00:39:11.528 clat percentiles (usec): 00:39:11.528 | 1.00th=[ 194], 5.00th=[ 227], 10.00th=[ 265], 20.00th=[ 302], 00:39:11.528 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 359], 60.00th=[ 383], 00:39:11.528 | 70.00th=[ 416], 80.00th=[ 453], 90.00th=[ 515], 95.00th=[ 586], 00:39:11.529 | 99.00th=[ 685], 99.50th=[ 717], 99.90th=[ 816], 99.95th=[ 816], 00:39:11.529 | 99.99th=[ 816] 00:39:11.529 bw ( KiB/s): min= 4096, max= 4096, per=47.75%, avg=4096.00, stdev= 0.00, samples=1 00:39:11.529 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:11.529 lat (usec) : 250=7.14%, 500=77.63%, 750=11.09%, 1000=0.38% 00:39:11.529 lat (msec) : 50=3.76% 00:39:11.529 cpu : usr=0.39%, sys=1.65%, ctx=533, majf=0, minf=2 00:39:11.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.529 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:11.529 job3: (groupid=0, jobs=1): err= 0: pid=2992956: Wed Nov 20 06:50:31 2024 00:39:11.529 read: IOPS=18, BW=75.0KiB/s (76.7kB/s)(76.0KiB/1014msec) 00:39:11.529 slat (nsec): min=23194, max=28285, avg=27682.84, stdev=1096.16 00:39:11.529 clat (usec): min=40805, max=41903, avg=41042.24, stdev=248.06 00:39:11.529 lat (usec): min=40833, max=41930, avg=41069.92, stdev=248.10 00:39:11.529 clat percentiles (usec): 00:39:11.529 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:39:11.529 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:11.529 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:39:11.529 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:39:11.529 | 99.99th=[41681] 00:39:11.529 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:39:11.529 slat (usec): min=10, max=2002, avg=29.24, stdev=88.29 00:39:11.529 clat (usec): min=137, max=742, avg=420.57, stdev=114.40 00:39:11.529 lat (usec): min=149, max=2320, avg=449.81, stdev=146.49 00:39:11.529 clat percentiles (usec): 00:39:11.529 | 1.00th=[ 151], 5.00th=[ 249], 10.00th=[ 293], 20.00th=[ 330], 00:39:11.529 | 30.00th=[ 359], 40.00th=[ 379], 50.00th=[ 420], 60.00th=[ 453], 00:39:11.529 | 70.00th=[ 474], 80.00th=[ 502], 90.00th=[ 570], 95.00th=[ 619], 00:39:11.529 | 99.00th=[ 709], 99.50th=[ 734], 99.90th=[ 742], 99.95th=[ 742], 00:39:11.529 | 99.99th=[ 742] 00:39:11.529 bw ( KiB/s): min= 4096, max= 4096, per=47.75%, avg=4096.00, stdev= 0.00, samples=1 00:39:11.529 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:11.529 lat (usec) : 250=4.90%, 500=71.19%, 750=20.34% 00:39:11.529 lat (msec) : 50=3.58% 00:39:11.529 cpu : usr=0.59%, sys=1.28%, ctx=533, majf=0, minf=1 00:39:11.529 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:11.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:11.529 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:11.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:11.529 00:39:11.529 Run status group 0 (all jobs): 00:39:11.529 READ: bw=2205KiB/s (2258kB/s), 71.2KiB/s-2046KiB/s (72.9kB/s-2095kB/s), io=2276KiB (2331kB), run=1001-1032msec 00:39:11.529 WRITE: bw=8578KiB/s (8783kB/s), 1984KiB/s-2705KiB/s (2032kB/s-2770kB/s), io=8852KiB (9064kB), run=1001-1032msec 00:39:11.529 00:39:11.529 Disk stats (read/write): 00:39:11.529 nvme0n1: ios=65/512, merge=0/0, ticks=1148/266, in_queue=1414, util=96.59% 00:39:11.529 nvme0n2: ios=506/512, merge=0/0, ticks=1250/311, in_queue=1561, util=97.44% 00:39:11.529 nvme0n3: ios=15/512, merge=0/0, ticks=616/180, in_queue=796, util=88.49% 00:39:11.529 nvme0n4: ios=71/512, merge=0/0, ticks=790/205, in_queue=995, util=97.54% 00:39:11.789 06:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:11.789 [global] 00:39:11.789 thread=1 00:39:11.789 invalidate=1 00:39:11.789 rw=randwrite 00:39:11.789 time_based=1 00:39:11.789 runtime=1 00:39:11.789 ioengine=libaio 00:39:11.789 direct=1 00:39:11.789 bs=4096 00:39:11.789 iodepth=1 00:39:11.789 norandommap=0 00:39:11.789 numjobs=1 00:39:11.789 00:39:11.789 verify_dump=1 00:39:11.789 verify_backlog=512 00:39:11.789 verify_state_save=0 00:39:11.789 do_verify=1 00:39:11.789 verify=crc32c-intel 00:39:11.789 [job0] 00:39:11.789 filename=/dev/nvme0n1 00:39:11.789 [job1] 00:39:11.789 filename=/dev/nvme0n2 00:39:11.789 [job2] 00:39:11.789 filename=/dev/nvme0n3 00:39:11.789 [job3] 00:39:11.789 filename=/dev/nvme0n4 00:39:11.789 Could not set queue depth (nvme0n1) 00:39:11.789 Could not set queue depth (nvme0n2) 00:39:11.789 Could not set queue depth (nvme0n3) 00:39:11.789 Could not set queue depth (nvme0n4) 00:39:12.049 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:12.049 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:12.049 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:12.049 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:12.049 fio-3.35 00:39:12.049 Starting 4 threads 00:39:13.435 00:39:13.435 job0: (groupid=0, jobs=1): err= 0: pid=2993477: Wed Nov 20 06:50:33 2024 00:39:13.435 read: IOPS=14, BW=59.0KiB/s (60.4kB/s)(60.0KiB/1017msec) 00:39:13.435 slat (nsec): min=25730, max=27089, avg=26281.60, stdev=343.53 00:39:13.435 clat (usec): min=41258, max=42055, avg=41909.18, stdev=187.90 00:39:13.435 lat (usec): min=41284, max=42081, avg=41935.46, stdev=188.04 00:39:13.435 clat percentiles (usec): 00:39:13.435 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:39:13.435 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:13.435 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:13.435 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:13.435 | 99.99th=[42206] 00:39:13.435 write: IOPS=503, BW=2014KiB/s 
(2062kB/s)(2048KiB/1017msec); 0 zone resets 00:39:13.435 slat (usec): min=9, max=28750, avg=87.11, stdev=1269.27 00:39:13.435 clat (usec): min=276, max=1008, avg=661.51, stdev=140.77 00:39:13.435 lat (usec): min=288, max=29309, avg=748.61, stdev=1272.81 00:39:13.435 clat percentiles (usec): 00:39:13.435 | 1.00th=[ 338], 5.00th=[ 408], 10.00th=[ 474], 20.00th=[ 537], 00:39:13.435 | 30.00th=[ 594], 40.00th=[ 635], 50.00th=[ 676], 60.00th=[ 717], 00:39:13.435 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 832], 95.00th=[ 881], 00:39:13.435 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1012], 99.95th=[ 1012], 00:39:13.435 | 99.99th=[ 1012] 00:39:13.435 bw ( KiB/s): min= 4096, max= 4096, per=43.15%, avg=4096.00, stdev= 0.00, samples=1 00:39:13.435 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:13.435 lat (usec) : 500=15.18%, 750=55.22%, 1000=26.57% 00:39:13.435 lat (msec) : 2=0.19%, 50=2.85% 00:39:13.435 cpu : usr=1.28%, sys=1.08%, ctx=529, majf=0, minf=1 00:39:13.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.435 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:13.435 job1: (groupid=0, jobs=1): err= 0: pid=2993478: Wed Nov 20 06:50:33 2024 00:39:13.435 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:13.435 slat (nsec): min=8216, max=56722, avg=25579.91, stdev=3569.77 00:39:13.435 clat (usec): min=814, max=41451, avg=1243.53, stdev=1783.21 00:39:13.435 lat (usec): min=839, max=41476, avg=1269.11, stdev=1783.22 00:39:13.435 clat percentiles (usec): 00:39:13.435 | 1.00th=[ 898], 5.00th=[ 979], 10.00th=[ 1037], 20.00th=[ 1090], 00:39:13.435 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:39:13.435 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1287], 95.00th=[ 1319], 00:39:13.435 | 99.00th=[ 1369], 99.50th=[ 1450], 99.90th=[41681], 99.95th=[41681], 00:39:13.435 | 99.99th=[41681] 00:39:13.435 write: IOPS=625, BW=2501KiB/s (2562kB/s)(2504KiB/1001msec); 0 zone resets 00:39:13.435 slat (nsec): min=8978, max=61616, avg=29152.04, stdev=7705.01 00:39:13.435 clat (usec): min=209, max=947, avg=515.38, stdev=134.62 00:39:13.435 lat (usec): min=241, max=979, avg=544.53, stdev=136.44 00:39:13.435 clat percentiles (usec): 00:39:13.435 | 1.00th=[ 285], 5.00th=[ 334], 10.00th=[ 355], 20.00th=[ 408], 00:39:13.435 | 30.00th=[ 449], 40.00th=[ 469], 50.00th=[ 494], 60.00th=[ 519], 00:39:13.435 | 70.00th=[ 562], 80.00th=[ 611], 90.00th=[ 701], 95.00th=[ 791], 00:39:13.435 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 947], 99.95th=[ 947], 00:39:13.435 | 99.99th=[ 947] 00:39:13.435 bw ( KiB/s): min= 4096, max= 4096, per=43.15%, avg=4096.00, stdev= 0.00, samples=1 00:39:13.435 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:13.435 lat (usec) : 250=0.26%, 500=28.91%, 750=21.62%, 1000=7.29% 00:39:13.435 lat (msec) : 2=41.83%, 50=0.09% 00:39:13.435 cpu : usr=1.60%, sys=3.40%, ctx=1138, majf=0, minf=1 00:39:13.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.435 issued rwts: total=512,626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.435 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:39:13.435 job2: (groupid=0, jobs=1): err= 0: pid=2993479: Wed Nov 20 06:50:33 2024 00:39:13.435 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:13.436 slat (nsec): min=10125, max=61001, avg=27271.12, stdev=3847.38 00:39:13.436 clat (usec): min=603, max=1577, avg=985.31, stdev=97.00 00:39:13.436 lat (usec): min=630, max=1608, avg=1012.58, stdev=96.94 00:39:13.436 clat percentiles (usec): 00:39:13.436 | 1.00th=[ 783], 5.00th=[ 840], 10.00th=[ 873], 20.00th=[ 914], 00:39:13.436 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 1004], 00:39:13.436 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1139], 00:39:13.436 | 99.00th=[ 1221], 99.50th=[ 1418], 99.90th=[ 1582], 99.95th=[ 1582], 00:39:13.436 | 99.99th=[ 1582] 00:39:13.436 write: IOPS=805, BW=3221KiB/s (3298kB/s)(3224KiB/1001msec); 0 zone resets 00:39:13.436 slat (nsec): min=9156, max=66910, avg=29454.14, stdev=9990.91 00:39:13.436 clat (usec): min=117, max=1278, avg=555.80, stdev=142.08 00:39:13.436 lat (usec): min=126, max=1317, avg=585.25, stdev=146.09 00:39:13.436 clat percentiles (usec): 00:39:13.436 | 1.00th=[ 235], 5.00th=[ 297], 10.00th=[ 359], 20.00th=[ 433], 00:39:13.436 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 594], 00:39:13.436 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 734], 95.00th=[ 766], 00:39:13.436 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 1287], 99.95th=[ 1287], 00:39:13.436 | 99.99th=[ 1287] 00:39:13.436 bw ( KiB/s): min= 4096, max= 4096, per=43.15%, avg=4096.00, stdev= 0.00, samples=1 00:39:13.436 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:13.436 lat (usec) : 250=0.99%, 500=18.06%, 750=38.01%, 1000=26.86% 00:39:13.436 lat (msec) : 2=16.08% 00:39:13.436 cpu : usr=2.10%, sys=5.30%, ctx=1318, majf=0, minf=1 00:39:13.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:13.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.436 issued rwts: total=512,806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:13.436 job3: (groupid=0, jobs=1): err= 0: pid=2993480: Wed Nov 20 06:50:33 2024 00:39:13.436 read: IOPS=170, BW=680KiB/s (697kB/s)(704KiB/1035msec) 00:39:13.436 slat (nsec): min=17691, max=69747, avg=25955.82, stdev=5081.60 00:39:13.436 clat (usec): min=806, max=42088, avg=3868.59, stdev=10333.61 00:39:13.436 lat (usec): min=833, max=42115, avg=3894.55, stdev=10333.27 00:39:13.436 clat percentiles (usec): 00:39:13.436 | 1.00th=[ 807], 5.00th=[ 922], 10.00th=[ 963], 20.00th=[ 1020], 00:39:13.436 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1123], 00:39:13.436 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1254], 95.00th=[41681], 00:39:13.436 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:13.436 | 99.99th=[42206] 00:39:13.436 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:39:13.436 slat (nsec): min=9776, max=54014, avg=30076.34, stdev=9424.26 00:39:13.436 clat (usec): min=243, max=1026, avg=641.40, stdev=132.27 00:39:13.436 lat (usec): min=260, max=1060, avg=671.48, stdev=135.56 00:39:13.436 clat percentiles (usec): 00:39:13.436 | 1.00th=[ 310], 5.00th=[ 404], 10.00th=[ 469], 20.00th=[ 523], 00:39:13.436 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:39:13.436 | 70.00th=[ 717], 
80.00th=[ 758], 90.00th=[ 799], 95.00th=[ 840], 00:39:13.436 | 99.00th=[ 898], 99.50th=[ 979], 99.90th=[ 1029], 99.95th=[ 1029], 00:39:13.436 | 99.99th=[ 1029] 00:39:13.436 bw ( KiB/s): min= 4096, max= 4096, per=43.15%, avg=4096.00, stdev= 0.00, samples=1 00:39:13.436 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:13.436 lat (usec) : 250=0.29%, 500=11.48%, 750=45.78%, 1000=20.78% 00:39:13.436 lat (msec) : 2=19.91%, 50=1.74% 00:39:13.436 cpu : usr=0.87%, sys=1.93%, ctx=690, majf=0, minf=1 00:39:13.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:13.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.436 issued rwts: total=176,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:13.436 00:39:13.436 Run status group 0 (all jobs): 00:39:13.436 READ: bw=4696KiB/s (4808kB/s), 59.0KiB/s-2046KiB/s (60.4kB/s-2095kB/s), io=4860KiB (4977kB), run=1001-1035msec 00:39:13.436 WRITE: bw=9492KiB/s (9720kB/s), 1979KiB/s-3221KiB/s (2026kB/s-3298kB/s), io=9824KiB (10.1MB), run=1001-1035msec 00:39:13.436 00:39:13.436 Disk stats (read/write): 00:39:13.436 nvme0n1: ios=63/512, merge=0/0, ticks=1198/324, in_queue=1522, util=97.19% 00:39:13.436 nvme0n2: ios=492/512, merge=0/0, ticks=659/244, in_queue=903, util=92.97% 00:39:13.436 nvme0n3: ios=512/557, merge=0/0, ticks=495/254, in_queue=749, util=88.66% 00:39:13.436 nvme0n4: ios=228/512, merge=0/0, ticks=919/317, in_queue=1236, util=97.88% 00:39:13.436 06:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:13.436 [global] 00:39:13.436 thread=1 00:39:13.436 invalidate=1 00:39:13.436 rw=write 00:39:13.436 time_based=1 00:39:13.436 runtime=1 00:39:13.436 ioengine=libaio 00:39:13.436 direct=1 00:39:13.436 bs=4096 00:39:13.436 iodepth=128 00:39:13.436 norandommap=0 00:39:13.436 numjobs=1 00:39:13.436 00:39:13.436 verify_dump=1 00:39:13.436 verify_backlog=512 00:39:13.436 verify_state_save=0 00:39:13.436 do_verify=1 00:39:13.436 verify=crc32c-intel 00:39:13.436 [job0] 00:39:13.436 filename=/dev/nvme0n1 00:39:13.436 [job1] 00:39:13.436 filename=/dev/nvme0n2 00:39:13.436 [job2] 00:39:13.436 filename=/dev/nvme0n3 00:39:13.436 [job3] 00:39:13.436 filename=/dev/nvme0n4 00:39:13.436 Could not set queue depth (nvme0n1) 00:39:13.436 Could not set queue depth (nvme0n2) 00:39:13.436 Could not set queue depth (nvme0n3) 00:39:13.436 Could not set queue depth (nvme0n4) 00:39:13.696 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:13.696 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:13.696 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:13.696 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:13.696 fio-3.35 00:39:13.696 Starting 4 threads 00:39:15.086 00:39:15.086 job0: (groupid=0, jobs=1): err= 0: pid=2993989: Wed Nov 20 06:50:34 2024 00:39:15.086 read: IOPS=5666, BW=22.1MiB/s (23.2MB/s)(22.3MiB/1006msec) 00:39:15.086 slat (nsec): min=946, max=9629.2k, avg=72313.65, stdev=550118.32 00:39:15.086 clat (usec): min=3203, max=40879, avg=10070.61, 
stdev=5367.89 00:39:15.086 lat (usec): min=3208, max=40889, avg=10142.92, stdev=5390.77 00:39:15.086 clat percentiles (usec): 00:39:15.086 | 1.00th=[ 4424], 5.00th=[ 5997], 10.00th=[ 6456], 20.00th=[ 6915], 00:39:15.086 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 00:39:15.086 | 70.00th=[10159], 80.00th=[11076], 90.00th=[13304], 95.00th=[18220], 00:39:15.086 | 99.00th=[40633], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:39:15.086 | 99.99th=[40633] 00:39:15.086 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:39:15.086 slat (nsec): min=1703, max=40032k, avg=90220.52, stdev=789615.00 00:39:15.086 clat (usec): min=1928, max=45946, avg=11008.71, stdev=7304.92 00:39:15.086 lat (usec): min=1935, max=45961, avg=11098.93, stdev=7359.09 00:39:15.086 clat percentiles (usec): 00:39:15.086 | 1.00th=[ 3490], 5.00th=[ 4359], 10.00th=[ 4948], 20.00th=[ 5932], 00:39:15.086 | 30.00th=[ 6587], 40.00th=[ 7111], 50.00th=[ 8029], 60.00th=[ 8848], 00:39:15.086 | 70.00th=[11600], 80.00th=[17433], 90.00th=[20055], 95.00th=[25560], 00:39:15.086 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:15.086 | 99.99th=[45876] 00:39:15.086 bw ( KiB/s): min=23584, max=25104, per=27.42%, avg=24344.00, stdev=1074.80, samples=2 00:39:15.086 iops : min= 5896, max= 6276, avg=6086.00, stdev=268.70, samples=2 00:39:15.086 lat (msec) : 2=0.05%, 4=1.81%, 10=64.47%, 20=26.48%, 50=7.20% 00:39:15.086 cpu : usr=4.38%, sys=6.67%, ctx=408, majf=0, minf=1 00:39:15.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:39:15.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:15.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:15.086 issued rwts: total=5701,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:15.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:15.086 job1: (groupid=0, jobs=1): err= 0: pid=2993990: Wed Nov 20 06:50:34 2024 00:39:15.086 read: IOPS=6836, BW=26.7MiB/s (28.0MB/s)(26.9MiB/1006msec) 00:39:15.086 slat (nsec): min=982, max=17155k, avg=67570.19, stdev=551762.97 00:39:15.086 clat (usec): min=1484, max=24323, avg=9593.82, stdev=3884.00 00:39:15.086 lat (usec): min=1502, max=28639, avg=9661.39, stdev=3908.24 00:39:15.086 clat percentiles (usec): 00:39:15.086 | 1.00th=[ 3458], 5.00th=[ 4686], 10.00th=[ 5932], 20.00th=[ 6849], 00:39:15.086 | 30.00th=[ 7242], 40.00th=[ 7635], 50.00th=[ 8455], 60.00th=[ 9503], 00:39:15.086 | 70.00th=[11207], 80.00th=[12125], 90.00th=[14877], 95.00th=[18482], 00:39:15.086 | 99.00th=[21627], 99.50th=[23987], 99.90th=[24249], 99.95th=[24249], 00:39:15.086 | 99.99th=[24249] 00:39:15.086 write: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:39:15.086 slat (nsec): min=1550, max=11132k, avg=60878.69, stdev=476062.43 00:39:15.086 clat (usec): min=859, max=60282, avg=8256.36, stdev=5673.72 00:39:15.086 lat (usec): min=867, max=60292, avg=8317.24, stdev=5722.74 00:39:15.086 clat percentiles (usec): 00:39:15.086 | 1.00th=[ 2311], 5.00th=[ 3752], 10.00th=[ 4359], 20.00th=[ 5342], 00:39:15.086 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 7046], 60.00th=[ 7504], 00:39:15.086 | 70.00th=[ 8979], 80.00th=[ 9896], 90.00th=[12387], 95.00th=[14353], 00:39:15.086 | 99.00th=[37487], 99.50th=[49546], 99.90th=[58459], 99.95th=[60031], 00:39:15.086 | 99.99th=[60031] 00:39:15.086 bw ( KiB/s): min=26968, max=30376, per=32.30%, avg=28672.00, stdev=2409.82, samples=2 00:39:15.086 iops : min= 6742, max= 
7594, avg=7168.00, stdev=602.45, samples=2 00:39:15.086 lat (usec) : 1000=0.05% 00:39:15.086 lat (msec) : 2=0.48%, 4=3.58%, 10=67.95%, 20=25.59%, 50=2.14% 00:39:15.086 lat (msec) : 100=0.22% 00:39:15.086 cpu : usr=5.17%, sys=8.36%, ctx=395, majf=0, minf=2 00:39:15.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:39:15.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:15.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:15.086 issued rwts: total=6878,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:15.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:15.086 job2: (groupid=0, jobs=1): err= 0: pid=2993991: Wed Nov 20 06:50:34 2024 00:39:15.086 read: IOPS=3372, BW=13.2MiB/s (13.8MB/s)(13.8MiB/1044msec) 00:39:15.086 slat (nsec): min=966, max=17422k, avg=143973.09, stdev=974920.70 00:39:15.086 clat (usec): min=4577, max=69179, avg=21012.37, stdev=12777.07 00:39:15.086 lat (usec): min=4581, max=69184, avg=21156.35, stdev=12848.30 00:39:15.086 clat percentiles (usec): 00:39:15.086 | 1.00th=[ 4621], 5.00th=[ 7177], 10.00th=[ 7963], 20.00th=[10159], 00:39:15.086 | 30.00th=[12256], 40.00th=[14222], 50.00th=[16909], 60.00th=[22414], 00:39:15.086 | 70.00th=[26608], 80.00th=[30802], 90.00th=[38536], 95.00th=[45876], 00:39:15.086 | 99.00th=[60031], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:39:15.086 | 99.99th=[68682] 00:39:15.086 write: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1044msec); 0 zone resets 00:39:15.086 slat (nsec): min=1744, max=9076.2k, avg=128523.64, stdev=756101.29 00:39:15.086 clat (usec): min=3572, max=65380, avg=16157.40, stdev=11232.03 00:39:15.086 lat (usec): min=3582, max=65389, avg=16285.93, stdev=11321.63 00:39:15.086 clat percentiles (usec): 00:39:15.086 | 1.00th=[ 4178], 5.00th=[ 6194], 10.00th=[ 6915], 20.00th=[ 8225], 00:39:15.086 | 30.00th=[ 9503], 40.00th=[10683], 50.00th=[12125], 60.00th=[15270], 00:39:15.086 | 70.00th=[19268], 80.00th=[22676], 90.00th=[25297], 95.00th=[39060], 00:39:15.086 | 99.00th=[62653], 99.50th=[63701], 99.90th=[65274], 99.95th=[65274], 00:39:15.086 | 99.99th=[65274] 00:39:15.086 bw ( KiB/s): min=12288, max=16384, per=16.15%, avg=14336.00, stdev=2896.31, samples=2 00:39:15.086 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:39:15.086 lat (msec) : 4=0.49%, 10=25.39%, 20=39.17%, 50=31.25%, 100=3.70% 00:39:15.086 cpu : usr=2.68%, sys=3.84%, ctx=284, majf=0, minf=1 00:39:15.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:15.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:15.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:15.086 issued rwts: total=3521,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:15.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:15.087 job3: (groupid=0, jobs=1): err= 0: pid=2993992: Wed Nov 20 06:50:34 2024 00:39:15.087 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:39:15.087 slat (nsec): min=943, max=16467k, avg=74194.51, stdev=624916.59 00:39:15.087 clat (usec): min=2974, max=33049, avg=10337.52, stdev=3917.47 00:39:15.087 lat (usec): min=2981, max=35838, avg=10411.71, stdev=3963.78 00:39:15.087 clat percentiles (usec): 00:39:15.087 | 1.00th=[ 3785], 5.00th=[ 5604], 10.00th=[ 6456], 20.00th=[ 7373], 00:39:15.087 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[10028], 00:39:15.087 | 70.00th=[12518], 80.00th=[13829], 90.00th=[16188], 
95.00th=[16909], 00:39:15.087 | 99.00th=[20579], 99.50th=[29230], 99.90th=[33162], 99.95th=[33162], 00:39:15.087 | 99.99th=[33162] 00:39:15.087 write: IOPS=6253, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1003msec); 0 zone resets 00:39:15.087 slat (nsec): min=1591, max=10414k, avg=69294.04, stdev=494243.34 00:39:15.087 clat (usec): min=477, max=46898, avg=10165.34, stdev=6495.84 00:39:15.087 lat (usec): min=1231, max=46901, avg=10234.63, stdev=6533.76 00:39:15.087 clat percentiles (usec): 00:39:15.087 | 1.00th=[ 2671], 5.00th=[ 4015], 10.00th=[ 4686], 20.00th=[ 5800], 00:39:15.087 | 30.00th=[ 6783], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8979], 00:39:15.087 | 70.00th=[10552], 80.00th=[12649], 90.00th=[18482], 95.00th=[26084], 00:39:15.087 | 99.00th=[34341], 99.50th=[36439], 99.90th=[41157], 99.95th=[42730], 00:39:15.087 | 99.99th=[46924] 00:39:15.087 bw ( KiB/s): min=24024, max=25136, per=27.69%, avg=24580.00, stdev=786.30, samples=2 00:39:15.087 iops : min= 6006, max= 6284, avg=6145.00, stdev=196.58, samples=2 00:39:15.087 lat (usec) : 500=0.01% 00:39:15.087 lat (msec) : 2=0.18%, 4=2.86%, 10=59.40%, 20=32.73%, 50=4.82% 00:39:15.087 cpu : usr=4.89%, sys=6.29%, ctx=470, majf=0, minf=1 00:39:15.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:39:15.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:15.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:15.087 issued rwts: total=6144,6272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:15.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:15.087 00:39:15.087 Run status group 0 (all jobs): 00:39:15.087 READ: bw=83.2MiB/s (87.3MB/s), 13.2MiB/s-26.7MiB/s (13.8MB/s-28.0MB/s), io=86.9MiB (91.1MB), run=1003-1044msec 00:39:15.087 WRITE: bw=86.7MiB/s (90.9MB/s), 13.4MiB/s-27.8MiB/s (14.1MB/s-29.2MB/s), io=90.5MiB (94.9MB), run=1003-1044msec 00:39:15.087 00:39:15.087 Disk stats (read/write): 00:39:15.087 nvme0n1: ios=5147/5518, merge=0/0, ticks=45118/49107, in_queue=94225, util=100.00% 00:39:15.087 nvme0n2: ios=5681/5944, merge=0/0, ticks=49919/40585, in_queue=90504, util=89.71% 00:39:15.087 nvme0n3: ios=2940/3072, merge=0/0, ticks=18922/18242, in_queue=37164, util=95.69% 00:39:15.087 nvme0n4: ios=5145/5120, merge=0/0, ticks=50385/46978, in_queue=97363, util=96.28% 00:39:15.087 06:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:15.087 [global] 00:39:15.087 thread=1 00:39:15.087 invalidate=1 00:39:15.087 rw=randwrite 00:39:15.087 time_based=1 00:39:15.087 runtime=1 00:39:15.087 ioengine=libaio 00:39:15.087 direct=1 00:39:15.087 bs=4096 00:39:15.087 iodepth=128 00:39:15.087 norandommap=0 00:39:15.087 numjobs=1 00:39:15.087 00:39:15.087 verify_dump=1 00:39:15.087 verify_backlog=512 00:39:15.087 verify_state_save=0 00:39:15.087 do_verify=1 00:39:15.087 verify=crc32c-intel 00:39:15.087 [job0] 00:39:15.087 filename=/dev/nvme0n1 00:39:15.087 [job1] 00:39:15.087 filename=/dev/nvme0n2 00:39:15.087 [job2] 00:39:15.087 filename=/dev/nvme0n3 00:39:15.087 [job3] 00:39:15.087 filename=/dev/nvme0n4 00:39:15.087 Could not set queue depth (nvme0n1) 00:39:15.087 Could not set queue depth (nvme0n2) 00:39:15.087 Could not set queue depth (nvme0n3) 00:39:15.087 Could not set queue depth (nvme0n4) 00:39:15.377 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:39:15.377 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:15.377 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:15.377 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:15.377 fio-3.35 00:39:15.377 Starting 4 threads 00:39:16.878 00:39:16.878 job0: (groupid=0, jobs=1): err= 0: pid=2994513: Wed Nov 20 06:50:36 2024 00:39:16.878 read: IOPS=6558, BW=25.6MiB/s (26.9MB/s)(25.8MiB/1006msec) 00:39:16.878 slat (nsec): min=916, max=14355k, avg=75967.34, stdev=588487.00 00:39:16.878 clat (usec): min=3258, max=35485, avg=10094.98, stdev=4901.44 00:39:16.878 lat (usec): min=3267, max=35493, avg=10170.95, stdev=4947.68 00:39:16.878 clat percentiles (usec): 00:39:16.878 | 1.00th=[ 4621], 5.00th=[ 6587], 10.00th=[ 7046], 20.00th=[ 7373], 00:39:16.878 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8848], 00:39:16.878 | 70.00th=[ 9896], 80.00th=[11863], 90.00th=[15795], 95.00th=[20317], 00:39:16.878 | 99.00th=[32375], 99.50th=[33162], 99.90th=[35390], 99.95th=[35390], 00:39:16.878 | 99.99th=[35390] 00:39:16.878 write: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec); 0 zone resets 00:39:16.878 slat (nsec): min=1522, max=9925.2k, avg=65348.87, stdev=459365.97 00:39:16.878 clat (usec): min=1078, max=41781, avg=9175.23, stdev=5782.25 00:39:16.878 lat (usec): min=1086, max=41787, avg=9240.58, stdev=5822.05 00:39:16.878 clat percentiles (usec): 00:39:16.878 | 1.00th=[ 3490], 5.00th=[ 4490], 10.00th=[ 5604], 20.00th=[ 6456], 00:39:16.878 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7832], 00:39:16.878 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[14615], 95.00th=[20579], 00:39:16.878 | 99.00th=[34866], 99.50th=[36963], 99.90th=[40109], 99.95th=[41681], 00:39:16.878 | 99.99th=[41681] 00:39:16.878 bw ( KiB/s): min=24576, max=28672, per=28.75%, avg=26624.00, stdev=2896.31, samples=2 00:39:16.878 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:39:16.878 lat (msec) : 2=0.09%, 4=1.10%, 10=75.85%, 20=17.49%, 50=5.47% 00:39:16.878 cpu : usr=4.38%, sys=7.06%, ctx=455, majf=0, minf=1 00:39:16.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:39:16.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:16.878 issued rwts: total=6598,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:16.878 job1: (groupid=0, jobs=1): err= 0: pid=2994514: Wed Nov 20 06:50:36 2024 00:39:16.878 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:39:16.878 slat (nsec): min=939, max=12621k, avg=87390.71, stdev=668737.03 00:39:16.878 clat (usec): min=2944, max=32803, avg=12080.03, stdev=5377.96 00:39:16.878 lat (usec): min=2947, max=32828, avg=12167.42, stdev=5427.66 00:39:16.878 clat percentiles (usec): 00:39:16.878 | 1.00th=[ 4948], 5.00th=[ 5407], 10.00th=[ 5800], 20.00th=[ 8029], 00:39:16.878 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[11076], 60.00th=[11994], 00:39:16.878 | 70.00th=[13173], 80.00th=[15401], 90.00th=[19268], 95.00th=[23462], 00:39:16.878 | 99.00th=[30016], 99.50th=[30278], 99.90th=[31589], 99.95th=[31589], 00:39:16.878 | 99.99th=[32900] 00:39:16.878 write: IOPS=5120, BW=20.0MiB/s (21.0MB/s)(20.2MiB/1008msec); 0 zone resets 00:39:16.878 slat (nsec): 
min=1504, max=17265k, avg=97144.96, stdev=777876.76 00:39:16.878 clat (usec): min=1732, max=56098, avg=12748.32, stdev=8548.15 00:39:16.878 lat (usec): min=1738, max=56107, avg=12845.46, stdev=8627.56 00:39:16.878 clat percentiles (usec): 00:39:16.878 | 1.00th=[ 3326], 5.00th=[ 4293], 10.00th=[ 5145], 20.00th=[ 5800], 00:39:16.878 | 30.00th=[ 7832], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10159], 00:39:16.878 | 70.00th=[14484], 80.00th=[19792], 90.00th=[23200], 95.00th=[28705], 00:39:16.878 | 99.00th=[48497], 99.50th=[50594], 99.90th=[55837], 99.95th=[55837], 00:39:16.878 | 99.99th=[55837] 00:39:16.878 bw ( KiB/s): min=18192, max=22768, per=22.12%, avg=20480.00, stdev=3235.72, samples=2 00:39:16.878 iops : min= 4548, max= 5692, avg=5120.00, stdev=808.93, samples=2 00:39:16.878 lat (msec) : 2=0.18%, 4=0.99%, 10=46.39%, 20=38.11%, 50=14.04% 00:39:16.878 lat (msec) : 100=0.30% 00:39:16.878 cpu : usr=3.67%, sys=5.76%, ctx=234, majf=0, minf=1 00:39:16.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:16.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:16.878 issued rwts: total=5120,5161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:16.878 job2: (groupid=0, jobs=1): err= 0: pid=2994516: Wed Nov 20 06:50:36 2024 00:39:16.878 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:39:16.878 slat (nsec): min=948, max=21121k, avg=79177.00, stdev=645757.71 00:39:16.878 clat (usec): min=1411, max=60592, avg=10844.01, stdev=7343.78 00:39:16.878 lat (usec): min=1418, max=60618, avg=10923.18, stdev=7391.70 00:39:16.878 clat percentiles (usec): 00:39:16.878 | 1.00th=[ 3621], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 6849], 00:39:16.878 | 30.00th=[ 7242], 40.00th=[ 7767], 50.00th=[ 8586], 60.00th=[ 9503], 00:39:16.878 | 70.00th=[10552], 80.00th=[12518], 90.00th=[17433], 95.00th=[25297], 00:39:16.878 | 99.00th=[43254], 99.50th=[53216], 99.90th=[60031], 99.95th=[60031], 00:39:16.878 | 99.99th=[60556] 00:39:16.878 write: IOPS=6869, BW=26.8MiB/s (28.1MB/s)(27.0MiB/1006msec); 0 zone resets 00:39:16.878 slat (nsec): min=1604, max=11988k, avg=59476.43, stdev=457872.25 00:39:16.878 clat (usec): min=647, max=33088, avg=7823.10, stdev=3571.07 00:39:16.878 lat (usec): min=656, max=33100, avg=7882.57, stdev=3597.34 00:39:16.878 clat percentiles (usec): 00:39:16.878 | 1.00th=[ 3097], 5.00th=[ 4146], 10.00th=[ 4752], 20.00th=[ 5800], 00:39:16.878 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7308], 60.00th=[ 7439], 00:39:16.878 | 70.00th=[ 7832], 80.00th=[ 9241], 90.00th=[11076], 95.00th=[13960], 00:39:16.878 | 99.00th=[24249], 99.50th=[32900], 99.90th=[33162], 99.95th=[33162], 00:39:16.878 | 99.99th=[33162] 00:39:16.878 bw ( KiB/s): min=25600, max=28672, per=29.30%, avg=27136.00, stdev=2172.23, samples=2 00:39:16.878 iops : min= 6400, max= 7168, avg=6784.00, stdev=543.06, samples=2 00:39:16.878 lat (usec) : 750=0.02%, 1000=0.02% 00:39:16.878 lat (msec) : 2=0.21%, 4=1.89%, 10=72.58%, 20=20.51%, 50=4.44% 00:39:16.878 lat (msec) : 100=0.32% 00:39:16.878 cpu : usr=4.38%, sys=6.47%, ctx=519, majf=0, minf=2 00:39:16.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:39:16.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:16.879 issued rwts: 
total=6656,6911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:16.879 job3: (groupid=0, jobs=1): err= 0: pid=2994517: Wed Nov 20 06:50:36 2024 00:39:16.879 read: IOPS=4309, BW=16.8MiB/s (17.7MB/s)(17.0MiB/1007msec) 00:39:16.879 slat (nsec): min=973, max=10601k, avg=91860.73, stdev=662796.76 00:39:16.879 clat (usec): min=1786, max=37489, avg=11566.37, stdev=5644.85 00:39:16.879 lat (usec): min=1827, max=38096, avg=11658.23, stdev=5690.75 00:39:16.879 clat percentiles (usec): 00:39:16.879 | 1.00th=[ 4424], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7963], 00:39:16.879 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10552], 00:39:16.879 | 70.00th=[11994], 80.00th=[13173], 90.00th=[17433], 95.00th=[24249], 00:39:16.879 | 99.00th=[34341], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:39:16.879 | 99.99th=[37487] 00:39:16.879 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:39:16.879 slat (nsec): min=1610, max=11803k, avg=122878.10, stdev=698888.03 00:39:16.879 clat (usec): min=627, max=67148, avg=16764.20, stdev=12888.04 00:39:16.879 lat (usec): min=751, max=67160, avg=16887.08, stdev=12968.86 00:39:16.879 clat percentiles (usec): 00:39:16.879 | 1.00th=[ 1352], 5.00th=[ 4047], 10.00th=[ 5669], 20.00th=[ 7439], 00:39:16.879 | 30.00th=[ 8586], 40.00th=[10028], 50.00th=[11469], 60.00th=[15664], 00:39:16.879 | 70.00th=[20317], 80.00th=[25297], 90.00th=[31327], 95.00th=[43254], 00:39:16.879 | 99.00th=[64226], 99.50th=[65274], 99.90th=[67634], 99.95th=[67634], 00:39:16.879 | 99.99th=[67634] 00:39:16.879 bw ( KiB/s): min=16424, max=20440, per=19.90%, avg=18432.00, stdev=2839.74, samples=2 00:39:16.879 iops : min= 4106, max= 5110, avg=4608.00, stdev=709.94, samples=2 00:39:16.879 lat (usec) : 750=0.03%, 1000=0.30% 00:39:16.879 lat (msec) : 2=0.34%, 4=2.03%, 10=41.48%, 20=36.38%, 50=17.13% 00:39:16.879 lat (msec) : 100=2.30% 00:39:16.879 cpu : usr=3.08%, sys=4.87%, ctx=356, majf=0, minf=1 00:39:16.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:16.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:16.879 issued rwts: total=4340,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:16.879 00:39:16.879 Run status group 0 (all jobs): 00:39:16.879 READ: bw=88.0MiB/s (92.3MB/s), 16.8MiB/s-25.8MiB/s (17.7MB/s-27.1MB/s), io=88.7MiB (93.0MB), run=1006-1008msec 00:39:16.879 WRITE: bw=90.4MiB/s (94.8MB/s), 17.9MiB/s-26.8MiB/s (18.7MB/s-28.1MB/s), io=91.2MiB (95.6MB), run=1006-1008msec 00:39:16.879 00:39:16.879 Disk stats (read/write): 00:39:16.879 nvme0n1: ios=5170/5632, merge=0/0, ticks=30879/33205, in_queue=64084, util=92.59% 00:39:16.879 nvme0n2: ios=4050/4096, merge=0/0, ticks=31600/28832, in_queue=60432, util=92.27% 00:39:16.879 nvme0n3: ios=5671/6123, merge=0/0, ticks=38932/35846, in_queue=74778, util=98.01% 00:39:16.879 nvme0n4: ios=4128/4167, merge=0/0, ticks=38064/42702, in_queue=80766, util=97.88% 00:39:16.879 06:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:16.879 06:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2994607 00:39:16.879 06:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:16.879 06:50:36 
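
The per-job share reported in the bandwidth lines is each job's average against the group total: job0 above averages 26624 KiB/s and the write group sums to 90.4 MiB/s (about 92570 KiB/s), and 26624 / 92570 is roughly 0.2876, matching the reported per=28.75%. The randwrite pass itself can be approximated outside the harness with a plain fio invocation; this is a minimal sketch built only from the parameters visible in the job banner (randwrite, 4 KiB blocks, libaio, queue depth 128, one job per namespace); the harness's fio-wrapper may pass additional options that do not appear in this log:

  # sketch: one of the four randwrite jobs above, parameters taken from the job banner
  # direct=1 is an assumption here; it is only confirmed for the read-phase job file later on
  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=randwrite --bs=4096 --ioengine=libaio --iodepth=128 --direct=1
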
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:16.879 [global] 00:39:16.879 thread=1 00:39:16.879 invalidate=1 00:39:16.879 rw=read 00:39:16.879 time_based=1 00:39:16.879 runtime=10 00:39:16.879 ioengine=libaio 00:39:16.879 direct=1 00:39:16.879 bs=4096 00:39:16.879 iodepth=1 00:39:16.879 norandommap=1 00:39:16.879 numjobs=1 00:39:16.879 00:39:16.879 [job0] 00:39:16.879 filename=/dev/nvme0n1 00:39:16.879 [job1] 00:39:16.879 filename=/dev/nvme0n2 00:39:16.879 [job2] 00:39:16.879 filename=/dev/nvme0n3 00:39:16.879 [job3] 00:39:16.879 filename=/dev/nvme0n4 00:39:16.879 Could not set queue depth (nvme0n1) 00:39:16.879 Could not set queue depth (nvme0n2) 00:39:16.879 Could not set queue depth (nvme0n3) 00:39:16.879 Could not set queue depth (nvme0n4) 00:39:17.166 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:17.166 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:17.166 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:17.166 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:17.166 fio-3.35 00:39:17.167 Starting 4 threads 00:39:19.707 06:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:19.967 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10735616, buflen=4096 00:39:19.967 fio: pid=2995015, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:19.967 06:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:20.227 06:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:20.227 06:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:20.227 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9523200, buflen=4096 00:39:20.227 fio: pid=2995009, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:20.227 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2555904, buflen=4096 00:39:20.227 fio: pid=2994987, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:20.227 06:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:20.227 06:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:20.487 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=311296, buflen=4096 00:39:20.487 fio: pid=2994995, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:20.487 06:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:39:20.487 06:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:20.487 00:39:20.487 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2994987: Wed Nov 20 06:50:40 2024 00:39:20.487 read: IOPS=210, BW=839KiB/s (859kB/s)(2496KiB/2974msec) 00:39:20.487 slat (usec): min=6, max=13207, avg=46.88, stdev=527.37 00:39:20.487 clat (usec): min=612, max=42029, avg=4677.03, stdev=11400.09 00:39:20.487 lat (usec): min=619, max=42055, avg=4723.95, stdev=11405.40 00:39:20.487 clat percentiles (usec): 00:39:20.487 | 1.00th=[ 775], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1020], 00:39:20.487 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1156], 00:39:20.487 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1336], 95.00th=[41157], 00:39:20.487 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:39:20.487 | 99.99th=[42206] 00:39:20.487 bw ( KiB/s): min= 456, max= 1472, per=11.81%, avg=841.60, stdev=422.81, samples=5 00:39:20.487 iops : min= 114, max= 368, avg=210.40, stdev=105.70, samples=5 00:39:20.487 lat (usec) : 750=0.48%, 1000=16.32% 00:39:20.487 lat (msec) : 2=74.08%, 50=8.96% 00:39:20.487 cpu : usr=0.13%, sys=0.77%, ctx=627, majf=0, minf=1 00:39:20.487 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.487 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.487 issued rwts: total=625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.487 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:20.487 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2994995: Wed Nov 20 06:50:40 2024 00:39:20.487 read: IOPS=24, BW=95.8KiB/s (98.1kB/s)(304KiB/3172msec) 00:39:20.487 slat (usec): min=26, max=6538, avg=157.47, stdev=838.26 00:39:20.487 clat (usec): min=739, max=43651, avg=41273.67, stdev=4729.87 00:39:20.487 lat (usec): min=773, max=47918, avg=41432.86, stdev=4806.31 00:39:20.487 clat percentiles (usec): 00:39:20.487 | 1.00th=[ 742], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:39:20.487 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:39:20.487 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:20.487 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:39:20.487 | 99.99th=[43779] 00:39:20.487 bw ( KiB/s): min= 95, max= 96, per=1.33%, avg=95.83, stdev= 0.41, samples=6 00:39:20.487 iops : min= 23, max= 24, avg=23.83, stdev= 0.41, samples=6 00:39:20.487 lat (usec) : 750=1.30% 00:39:20.487 lat (msec) : 50=97.40% 00:39:20.487 cpu : usr=0.00%, sys=0.16%, ctx=80, majf=0, minf=2 00:39:20.487 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.487 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.487 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.487 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:20.487 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2995009: Wed Nov 20 06:50:40 2024 00:39:20.487 read: IOPS=821, BW=3286KiB/s 
(3365kB/s)(9300KiB/2830msec) 00:39:20.487 slat (nsec): min=6109, max=62946, avg=25451.41, stdev=4045.29 00:39:20.487 clat (usec): min=433, max=42000, avg=1174.08, stdev=1682.99 00:39:20.487 lat (usec): min=441, max=42027, avg=1199.53, stdev=1683.46 00:39:20.487 clat percentiles (usec): 00:39:20.487 | 1.00th=[ 783], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1029], 00:39:20.487 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:39:20.487 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:39:20.487 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[41681], 99.95th=[41681], 00:39:20.487 | 99.99th=[42206] 00:39:20.487 bw ( KiB/s): min= 3424, max= 3480, per=48.44%, avg=3449.60, stdev=26.17, samples=5 00:39:20.487 iops : min= 856, max= 870, avg=862.40, stdev= 6.54, samples=5 00:39:20.487 lat (usec) : 500=0.04%, 750=0.34%, 1000=14.49% 00:39:20.487 lat (msec) : 2=84.91%, 50=0.17% 00:39:20.487 cpu : usr=0.85%, sys=2.62%, ctx=2328, majf=0, minf=2 00:39:20.487 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.487 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.487 issued rwts: total=2326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.487 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:20.487 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2995015: Wed Nov 20 06:50:40 2024 00:39:20.487 read: IOPS=1007, BW=4029KiB/s (4126kB/s)(10.2MiB/2602msec) 00:39:20.487 slat (nsec): min=6380, max=76149, avg=27698.37, stdev=3193.82 00:39:20.487 clat (usec): min=631, max=1286, avg=950.01, stdev=85.45 00:39:20.487 lat (usec): min=659, max=1313, avg=977.71, stdev=85.48 00:39:20.487 clat percentiles (usec): 00:39:20.487 | 1.00th=[ 717], 5.00th=[ 799], 10.00th=[ 840], 20.00th=[ 889], 00:39:20.487 | 30.00th=[ 914], 40.00th=[ 938], 50.00th=[ 955], 60.00th=[ 971], 00:39:20.487 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1057], 95.00th=[ 1090], 00:39:20.487 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1221], 99.95th=[ 1221], 00:39:20.487 | 99.99th=[ 1287] 00:39:20.487 bw ( KiB/s): min= 4032, max= 4096, per=57.14%, avg=4068.80, stdev=23.73, samples=5 00:39:20.487 iops : min= 1008, max= 1024, avg=1017.20, stdev= 5.93, samples=5 00:39:20.487 lat (usec) : 750=1.72%, 1000=72.96% 00:39:20.487 lat (msec) : 2=25.29% 00:39:20.487 cpu : usr=2.15%, sys=3.88%, ctx=2623, majf=0, minf=2 00:39:20.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:20.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.488 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:20.488 issued rwts: total=2622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:20.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:20.488 00:39:20.488 Run status group 0 (all jobs): 00:39:20.488 READ: bw=7120KiB/s (7291kB/s), 95.8KiB/s-4029KiB/s (98.1kB/s-4126kB/s), io=22.1MiB (23.1MB), run=2602-3172msec 00:39:20.488 00:39:20.488 Disk stats (read/write): 00:39:20.488 nvme0n1: ios=596/0, merge=0/0, ticks=2782/0, in_queue=2782, util=94.52% 00:39:20.488 nvme0n2: ios=74/0, merge=0/0, ticks=3055/0, in_queue=3055, util=95.51% 00:39:20.488 nvme0n3: ios=2234/0, merge=0/0, ticks=2468/0, in_queue=2468, util=96.04% 00:39:20.488 nvme0n4: ios=2622/0, merge=0/0, ticks=2313/0, in_queue=2313, util=96.28% 00:39:20.747 06:50:40 
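
Each job in this pass ends with err=95 (Operation not supported): the reads were launched in the background and the backing raid/malloc bdevs were then deleted out from under them via rpc.py, so in-flight I/O fails once the namespaces go away, which is exactly what the hotplug test is arranged to provoke. The control flow, condensed from the fio.sh trace around these records (a sketch of the traced sequence, not the verbatim script; read-jobs.fio is a stand-in name, the harness generates the job file inline):

  # sketch of the hotplug sequence traced above (bdev names and pid from this log)
  fio read-jobs.fio &>/dev/null &      # background reads against /dev/nvme0n1..n4
  fio_pid=$!                           # recorded as 2994607 above
  sleep 3
  scripts/rpc.py bdev_raid_delete concat0
  scripts/rpc.py bdev_raid_delete raid0
  scripts/rpc.py bdev_malloc_delete Malloc0   # ...and Malloc1..Malloc6 in turn
  wait $fio_pid || fio_status=4        # non-zero exit: 'fio failed as expected'
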
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:20.747 06:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:20.747 06:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:20.747 06:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:21.007 06:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:21.007 06:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:21.267 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:21.267 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2994607 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:21.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:21.527 nvmf hotplug test: fio failed as expected 00:39:21.527 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:21.788 rmmod nvme_tcp 00:39:21.788 rmmod nvme_fabrics 00:39:21.788 rmmod nvme_keyring 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2991396 ']' 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2991396 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2991396 ']' 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2991396 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2991396 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2991396' 00:39:21.788 killing process with pid 2991396 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2991396 00:39:21.788 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2991396 00:39:22.049 
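
killprocess refuses to signal anything it cannot positively identify: it probes the pid with kill -0, reads the command name back from ps, and only sends the kill if the process is not a sudo wrapper, then waits for it to exit. A condensed sketch of the checks traced above:

  # sketch of the killprocess guard seen above (pid from this log)
  pid=2991396
  kill -0 $pid                                # still alive?
  name=$(ps --no-headers -o comm= $pid)       # here: reactor_0
  if [ "$name" != sudo ]; then
      echo "killing process with pid $pid"
      kill $pid && wait $pid
  fi
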
06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:22.049 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:22.049 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:22.049 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:22.049 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:39:22.049 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:22.049 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:39:22.049 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:22.049 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:22.049 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.049 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:22.049 06:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:23.959 06:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:23.959 00:39:23.959 real 0m28.384s 00:39:23.959 user 2m11.533s 00:39:23.959 sys 0m12.215s 00:39:23.959 06:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:23.959 06:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:23.959 ************************************ 00:39:23.959 END TEST nvmf_fio_target 00:39:23.959 ************************************ 00:39:23.959 06:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:23.960 06:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:23.960 06:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:23.960 06:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:23.960 ************************************ 00:39:23.960 START TEST nvmf_bdevio 00:39:23.960 ************************************ 00:39:23.960 06:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:24.223 * Looking for test storage... 
00:39:24.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:24.223 06:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:24.223 06:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:39:24.223 06:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:24.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.223 --rc genhtml_branch_coverage=1 00:39:24.223 --rc genhtml_function_coverage=1 00:39:24.223 --rc genhtml_legend=1 00:39:24.223 --rc geninfo_all_blocks=1 00:39:24.223 --rc geninfo_unexecuted_blocks=1 00:39:24.223 00:39:24.223 ' 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:24.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.223 --rc genhtml_branch_coverage=1 00:39:24.223 --rc genhtml_function_coverage=1 00:39:24.223 --rc genhtml_legend=1 00:39:24.223 --rc geninfo_all_blocks=1 00:39:24.223 --rc geninfo_unexecuted_blocks=1 00:39:24.223 00:39:24.223 ' 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:24.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.223 --rc genhtml_branch_coverage=1 00:39:24.223 --rc genhtml_function_coverage=1 00:39:24.223 --rc genhtml_legend=1 00:39:24.223 --rc geninfo_all_blocks=1 00:39:24.223 --rc geninfo_unexecuted_blocks=1 00:39:24.223 00:39:24.223 ' 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:24.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.223 --rc genhtml_branch_coverage=1 00:39:24.223 --rc genhtml_function_coverage=1 00:39:24.223 --rc genhtml_legend=1 00:39:24.223 --rc geninfo_all_blocks=1 00:39:24.223 --rc geninfo_unexecuted_blocks=1 00:39:24.223 00:39:24.223 ' 00:39:24.223 06:50:44 
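
The block above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x: cmp_versions splits both version strings on '.', '-' and ':', then walks the fields numerically; the first comparison (1 < 2) settles it, so lt 1.15 2 succeeds and the branch/function coverage flags are exported. A reconstruction of that walk (a sketch of the traced logic, not the verbatim helper):

  # sketch: the field-wise comparison traced above
  ver1=1.15 ver2=2
  IFS='.-:' read -ra a <<< "$ver1"
  IFS='.-:' read -ra b <<< "$ver2"
  (( ${a[0]:-0} < ${b[0]:-0} )) && echo "$ver1 < $ver2"   # 1 < 2 -> true
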
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:24.223 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:24.224 06:50:44 
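
The repetition in these PATH values is expected: paths/export.sh unconditionally prepends the Go, protoc and golangci directories, and it has been sourced once per nested script by this point, so the same prefixes stack up. This is harmless for lookup, since the first hit wins, but if the variable ever needed collapsing, a one-liner along these lines would do it (a sketch, not something the harness runs):

  # sketch: drop duplicate PATH entries, keeping the first occurrence of each
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
  PATH=${PATH%:}
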
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:24.224 06:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:32.362 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:32.362 06:50:51 
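
gather_supported_nvmf_pci_devs builds whitelists of NIC device IDs, Intel E810 (0x1592, 0x159b), X722 (0x37d2) and a range of Mellanox parts, and then matches every PCI NIC against them; both ports of this machine's E810 match 0x8086:0x159b, as the Found lines show. The same inventory can be taken directly (a sketch using the IDs from the trace, not part of the harness):

  # sketch: list E810 ports by the device IDs the helper looks for
  lspci -Dnn | grep -Ei '8086:(1592|159b)'
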
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:32.362 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:32.362 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:32.363 Found net devices under 0000:31:00.0: cvl_0_0 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:32.363 Found net devices under 0000:31:00.1: cvl_0_1 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
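
For each matching PCI function the helper resolves the interface name through sysfs rather than guessing: it globs /sys/bus/pci/devices/<addr>/net/, checks the link state, and records the basename (cvl_0_0 and cvl_0_1 here). The equivalent lookup, condensed (addresses from this log):

  # sketch: PCI address -> net interface, as performed above
  for pci in 0000:31:00.0 0000:31:00.1; do
      for d in /sys/bus/pci/devices/$pci/net/*; do
          echo "Found net devices under $pci: ${d##*/}"
      done
  done
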
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:32.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:32.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:39:32.363 00:39:32.363 --- 10.0.0.2 ping statistics --- 00:39:32.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.363 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:32.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:32.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:39:32.363 00:39:32.363 --- 10.0.0.1 ping statistics --- 00:39:32.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:32.363 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:32.363 06:50:51 
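
nvmf_tcp_init isolates the target end of the link: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 while the initiator keeps cvl_0_1 at 10.0.0.1/24, an iptables ACCEPT is inserted for the NVMe/TCP port, and a ping in each direction proves the path before any NVMe traffic flows. Condensed from the trace (same interface names, addresses and port):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
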
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3000027 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3000027 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3000027 ']' 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:32.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:32.363 06:50:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.363 [2024-11-20 06:50:51.749409] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:32.363 [2024-11-20 06:50:51.750572] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:39:32.363 [2024-11-20 06:50:51.750624] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:32.363 [2024-11-20 06:50:51.850457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:32.363 [2024-11-20 06:50:51.901032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:32.363 [2024-11-20 06:50:51.901082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:32.363 [2024-11-20 06:50:51.901091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:32.363 [2024-11-20 06:50:51.901097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:32.363 [2024-11-20 06:50:51.901104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:32.363 [2024-11-20 06:50:51.903462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:32.363 [2024-11-20 06:50:51.903629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:32.363 [2024-11-20 06:50:51.903806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:32.363 [2024-11-20 06:50:51.903841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:32.363 [2024-11-20 06:50:51.989151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
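
The target is launched inside the namespace with -m 0x78 and --interrupt-mode; 0x78 is binary 0111 1000, i.e. cores 3 through 6, which is exactly the set the four "Reactor started" notices report (4, 5, 6 and 3). A one-liner to expand such a mask:

  # 0x78 -> cores 3 4 5 6, matching the reactor notices above
  for c in {0..7}; do (( (0x78 >> c) & 1 )) && printf '%d ' "$c"; done; echo
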
00:39:32.363 [2024-11-20 06:50:51.990189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:32.363 [2024-11-20 06:50:51.990572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:32.363 [2024-11-20 06:50:51.991103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:32.364 [2024-11-20 06:50:51.991140] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.935 [2024-11-20 06:50:52.613016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.935 Malloc0 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.935 06:50:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:32.935 [2024-11-20 06:50:52.713327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:32.935 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:32.935 { 00:39:32.935 "params": { 00:39:32.935 "name": "Nvme$subsystem", 00:39:32.935 "trtype": "$TEST_TRANSPORT", 00:39:32.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:32.935 "adrfam": "ipv4", 00:39:32.936 "trsvcid": "$NVMF_PORT", 00:39:32.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:32.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:32.936 "hdgst": ${hdgst:-false}, 00:39:32.936 "ddgst": ${ddgst:-false} 00:39:32.936 }, 00:39:32.936 "method": "bdev_nvme_attach_controller" 00:39:32.936 } 00:39:32.936 EOF 00:39:32.936 )") 00:39:32.936 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:39:32.936 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:39:32.936 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:39:32.936 06:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:32.936 "params": { 00:39:32.936 "name": "Nvme1", 00:39:32.936 "trtype": "tcp", 00:39:32.936 "traddr": "10.0.0.2", 00:39:32.936 "adrfam": "ipv4", 00:39:32.936 "trsvcid": "4420", 00:39:32.936 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:32.936 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:32.936 "hdgst": false, 00:39:32.936 "ddgst": false 00:39:32.936 }, 00:39:32.936 "method": "bdev_nvme_attach_controller" 00:39:32.936 }' 00:39:32.936 [2024-11-20 06:50:52.771626] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
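
The rpc_cmd calls above are the canonical five-step bring-up of an NVMe/TCP target; rpc_cmd is the test harness's wrapper around scripts/rpc.py talking to the target's RPC socket. Spelled out directly, as a sketch that assumes the default /var/tmp/spdk.sock:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192        # -o: disable C2H-success shortcut, -u: 8 KiB I/O unit
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                        # -a: allow any host, -s: serial number
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
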
00:39:32.936 [2024-11-20 06:50:52.771700] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000087 ] 00:39:33.196 [2024-11-20 06:50:52.865344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:33.196 [2024-11-20 06:50:52.922265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:33.196 [2024-11-20 06:50:52.922433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:33.196 [2024-11-20 06:50:52.922433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:33.196 I/O targets: 00:39:33.196 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:33.196 00:39:33.196 00:39:33.196 CUnit - A unit testing framework for C - Version 2.1-3 00:39:33.196 http://cunit.sourceforge.net/ 00:39:33.196 00:39:33.196 00:39:33.196 Suite: bdevio tests on: Nvme1n1 00:39:33.458 Test: blockdev write read block ...passed 00:39:33.458 Test: blockdev write zeroes read block ...passed 00:39:33.458 Test: blockdev write zeroes read no split ...passed 00:39:33.458 Test: blockdev write zeroes read split ...passed 00:39:33.458 Test: blockdev write zeroes read split partial ...passed 00:39:33.458 Test: blockdev reset ...[2024-11-20 06:50:53.251668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:33.458 [2024-11-20 06:50:53.251778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119f1c0 (9): Bad file descriptor 00:39:33.458 [2024-11-20 06:50:53.346413] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
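
Note how bdevio is configured: gen_nvmf_target_json prints a bdev_nvme_attach_controller stanza on stdout and the harness hands it over as --json /dev/fd/62, that is, through bash process substitution, so no config file ever touches disk. A reduced sketch of that pattern (the surrounding "subsystems" wrapper is an assumption; the log only shows the inner object):

  gen_json() {
      cat <<EOF
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
  }
  test/bdev/bdevio/bdevio --json <(gen_json)   # the child sees the pipe as /dev/fd/62
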
00:39:33.458 passed 00:39:33.719 Test: blockdev write read 8 blocks ...passed 00:39:33.719 Test: blockdev write read size > 128k ...passed 00:39:33.719 Test: blockdev write read invalid size ...passed 00:39:33.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:33.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:33.719 Test: blockdev write read max offset ...passed 00:39:33.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:33.719 Test: blockdev writev readv 8 blocks ...passed 00:39:33.719 Test: blockdev writev readv 30 x 1block ...passed 00:39:33.719 Test: blockdev writev readv block ...passed 00:39:33.719 Test: blockdev writev readv size > 128k ...passed 00:39:33.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:33.719 Test: blockdev comparev and writev ...[2024-11-20 06:50:53.609502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:33.719 [2024-11-20 06:50:53.609551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:33.719 [2024-11-20 06:50:53.609568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:33.719 [2024-11-20 06:50:53.609577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:33.719 [2024-11-20 06:50:53.610074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:33.719 [2024-11-20 06:50:53.610088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:33.719 [2024-11-20 06:50:53.610104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:33.719 [2024-11-20 06:50:53.610114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:33.719 [2024-11-20 06:50:53.610633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:33.719 [2024-11-20 06:50:53.610648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:33.719 [2024-11-20 06:50:53.610663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:33.719 [2024-11-20 06:50:53.610671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:33.719 [2024-11-20 06:50:53.611200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:33.719 [2024-11-20 06:50:53.611216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:33.719 [2024-11-20 06:50:53.611232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:33.719 [2024-11-20 06:50:53.611242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:33.980 passed 00:39:33.980 Test: blockdev nvme passthru rw ...passed 00:39:33.980 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:50:53.695302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:33.980 [2024-11-20 06:50:53.695320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:33.980 [2024-11-20 06:50:53.695572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:33.980 [2024-11-20 06:50:53.695584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:33.980 [2024-11-20 06:50:53.695855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:33.980 [2024-11-20 06:50:53.695870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:33.980 [2024-11-20 06:50:53.696153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:33.980 [2024-11-20 06:50:53.696165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:33.980 passed 00:39:33.980 Test: blockdev nvme admin passthru ...passed 00:39:33.980 Test: blockdev copy ...passed 00:39:33.980 00:39:33.980 Run Summary: Type Total Ran Passed Failed Inactive 00:39:33.980 suites 1 1 n/a 0 0 00:39:33.980 tests 23 23 23 0 0 00:39:33.980 asserts 152 152 152 0 n/a 00:39:33.980 00:39:33.980 Elapsed time = 1.344 seconds 00:39:33.980 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:33.980 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.980 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:34.241 rmmod nvme_tcp 00:39:34.241 rmmod nvme_fabrics 00:39:34.241 rmmod nvme_keyring 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
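
Cleanup reuses the comment tag planted during setup: rather than deleting individual rules, iptr dumps the whole table, filters out every line carrying SPDK_NVMF, and reloads the result, which removes exactly the rules this run added. A sketch of that teardown, assuming the pid variable and namespace name from the earlier sketches:

  iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop only tagged rules
  kill "$nvmfpid"
  while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done  # wait for target exit
  ip netns delete tgt_ns
  modprobe -r nvme-tcp nvme-fabrics                         # best effort, as above
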
00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3000027 ']' 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3000027 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3000027 ']' 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3000027 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:34.241 06:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3000027 00:39:34.241 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:39:34.241 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:39:34.241 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3000027' 00:39:34.241 killing process with pid 3000027 00:39:34.241 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3000027 00:39:34.241 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3000027 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:34.502 06:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.416 06:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:36.416 00:39:36.416 real 0m12.463s 00:39:36.416 user 
0m9.994s 00:39:36.416 sys 0m6.536s 00:39:36.416 06:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:36.416 06:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:36.416 ************************************ 00:39:36.416 END TEST nvmf_bdevio 00:39:36.416 ************************************ 00:39:36.677 06:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:36.677 00:39:36.677 real 5m1.965s 00:39:36.677 user 10m10.637s 00:39:36.677 sys 2m6.805s 00:39:36.677 06:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:36.677 06:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:36.677 ************************************ 00:39:36.677 END TEST nvmf_target_core_interrupt_mode 00:39:36.677 ************************************ 00:39:36.677 06:50:56 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:36.677 06:50:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:36.677 06:50:56 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:36.677 06:50:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:36.677 ************************************ 00:39:36.677 START TEST nvmf_interrupt 00:39:36.677 ************************************ 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:36.677 * Looking for test storage... 
00:39:36.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:36.677 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:36.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.939 --rc genhtml_branch_coverage=1 00:39:36.939 --rc genhtml_function_coverage=1 00:39:36.939 --rc genhtml_legend=1 00:39:36.939 --rc geninfo_all_blocks=1 00:39:36.939 --rc geninfo_unexecuted_blocks=1 00:39:36.939 00:39:36.939 ' 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:36.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.939 --rc genhtml_branch_coverage=1 00:39:36.939 --rc genhtml_function_coverage=1 00:39:36.939 --rc genhtml_legend=1 00:39:36.939 --rc geninfo_all_blocks=1 00:39:36.939 --rc geninfo_unexecuted_blocks=1 00:39:36.939 00:39:36.939 ' 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:36.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.939 --rc genhtml_branch_coverage=1 00:39:36.939 --rc genhtml_function_coverage=1 00:39:36.939 --rc genhtml_legend=1 00:39:36.939 --rc geninfo_all_blocks=1 00:39:36.939 --rc geninfo_unexecuted_blocks=1 00:39:36.939 00:39:36.939 ' 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:36.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.939 --rc genhtml_branch_coverage=1 00:39:36.939 --rc genhtml_function_coverage=1 00:39:36.939 --rc genhtml_legend=1 00:39:36.939 --rc geninfo_all_blocks=1 00:39:36.939 --rc geninfo_unexecuted_blocks=1 00:39:36.939 00:39:36.939 ' 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:36.939 06:50:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:36.940 06:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:45.089 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:45.089 06:51:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:45.089 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:45.089 Found net devices under 0000:31:00.0: cvl_0_0 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:45.089 Found net devices under 0000:31:00.1: cvl_0_1 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:45.089 06:51:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:45.089 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:45.090 06:51:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:45.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:45.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:39:45.090 00:39:45.090 --- 10.0.0.2 ping statistics --- 00:39:45.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.090 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:45.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:45.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:39:45.090 00:39:45.090 --- 10.0.0.1 ping statistics --- 00:39:45.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:45.090 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3004707 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3004707 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 3004707 ']' 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:45.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:45.090 06:51:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:45.090 [2024-11-20 06:51:04.399563] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:45.090 [2024-11-20 06:51:04.400703] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:39:45.090 [2024-11-20 06:51:04.400766] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:45.090 [2024-11-20 06:51:04.501070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:45.090 [2024-11-20 06:51:04.553108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:39:45.090 [2024-11-20 06:51:04.553158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:45.090 [2024-11-20 06:51:04.553166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:45.090 [2024-11-20 06:51:04.553173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:45.090 [2024-11-20 06:51:04.553179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:45.090 [2024-11-20 06:51:04.554993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:45.090 [2024-11-20 06:51:04.555124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.090 [2024-11-20 06:51:04.633304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:45.090 [2024-11-20 06:51:04.633932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:45.090 [2024-11-20 06:51:04.634235] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:45.350 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:45.350 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:39:45.350 06:51:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:45.350 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:45.350 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:45.350 06:51:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:45.350 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:45.350 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:45.610 5000+0 records in 00:39:45.610 5000+0 records out 00:39:45.610 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0189681 s, 540 MB/s 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:45.610 AIO0 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:45.610 [2024-11-20 06:51:05.328088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.610 06:51:05 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:45.610 [2024-11-20 06:51:05.372786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3004707 0 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3004707 0 idle 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3004707 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3004707 -w 256 00:39:45.610 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3004707 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.31 reactor_0' 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3004707 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.31 reactor_0 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3004707 1 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3004707 1 idle 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3004707 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:45.871 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3004707 -w 256 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3004753 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3004753 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3004922 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3004707 0 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3004707 0 busy 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3004707 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3004707 -w 256 00:39:45.872 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:46.134 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3004707 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.33 reactor_0' 00:39:46.134 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3004707 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.33 reactor_0 00:39:46.134 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:46.134 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:46.134 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:39:46.134 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:39:46.134 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:46.134 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:46.134 06:51:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:39:47.074 06:51:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:39:47.074 06:51:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:47.074 06:51:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3004707 -w 256 00:39:47.074 06:51:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3004707 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.50 reactor_0' 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3004707 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.50 reactor_0 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3004707 1 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3004707 1 busy 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3004707 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3004707 -w 256 00:39:47.334 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:47.595 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3004753 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:01.26 reactor_1' 00:39:47.595 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3004753 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:01.26 reactor_1 00:39:47.595 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:47.595 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:47.595 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:39:47.595 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:39:47.595 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:39:47.595 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:39:47.595 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:39:47.595 06:51:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:47.595 06:51:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3004922 00:39:57.590 Initializing NVMe Controllers 00:39:57.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:57.590 Controller IO queue size 256, less than required. 00:39:57.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:57.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:57.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:57.590 Initialization complete. Launching workers. 
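The spdk_nvme_perf command traced just above is what drives both reactors busy for the next ten seconds. Restated as a standalone invocation (same binary, flags, and target parameters as this run; only the PERF variable name is introduced here for readability):

# -q 256: queue depth; -o 4096: I/O size in bytes; -w randrw -M 30: random
# mixed workload with 30% reads; -t 10: run for 10 seconds; -c 0xC: core mask
# selecting lcores 2 and 3; -r: transport ID of the TCP target under test.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
"$PERF" -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The latency table that follows shows the I/O landing on lcores 2 and 3, matching the 0xC mask.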
00:39:57.590 ======================================================== 00:39:57.590 Latency(us) 00:39:57.590 Device Information : IOPS MiB/s Average min max 00:39:57.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19365.29 75.65 13224.38 3981.46 33643.34 00:39:57.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19727.59 77.06 12978.76 8008.38 29785.68 00:39:57.590 ======================================================== 00:39:57.590 Total : 39092.89 152.71 13100.43 3981.46 33643.34 00:39:57.590 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3004707 0 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3004707 0 idle 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3004707 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3004707 -w 256 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3004707 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.08 reactor_0' 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3004707 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.08 reactor_0 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3004707 1 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3004707 1 idle 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3004707 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3004707 -w 256 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3004753 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.78 reactor_1' 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3004753 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.78 reactor_1 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:57.590 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:57.591 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:57.591 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:57.591 06:51:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:57.591 06:51:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:57.591 06:51:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:39:57.591 06:51:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:39:57.591 06:51:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:39:57.591 06:51:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:39:57.591 06:51:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3004707 0 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3004707 0 idle 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3004707 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3004707 -w 256 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3004707 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.46 reactor_0' 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3004707 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.46 reactor_0 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3004707 1 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3004707 1 idle 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3004707 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
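Every reactor probe in this test, including the idle check being set up here, boils down to the same three steps: take one batch frame of per-thread top output for the target pid, pick out the reactor_<idx> thread, and compare its %CPU column (field 9) against a threshold. A condensed sketch of that logic, mirroring the interrupt/common.sh trace above (the function names here are illustrative, not the helper's real ones):

# Sample the %CPU of one reactor thread of an SPDK target process.
reactor_cpu_rate() {
    local pid=$1 idx=$2
    # -b batch, -H per-thread, -n 1 one frame; field 9 of top's output is %CPU.
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | awk '{print $9}'
}

# Idle means the reactor's CPU share stays at or below the threshold (30 here).
is_idle() {
    local pid=$1 idx=$2 idle_threshold=${3:-30} rate
    rate=$(reactor_cpu_rate "$pid" "$idx")
    rate=${rate%.*}                  # truncate 99.9 -> 99, as the traced helper does
    (( ${rate:-0} <= idle_threshold ))
}

For this run, is_idle 3004707 1 succeeds once reactor_1 drops back to 0.0%, which is exactly what the trace below confirms.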
00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3004707 -w 256 00:39:59.505 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3004753 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:09.91 reactor_1' 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3004753 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:09.91 reactor_1 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:59.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:59.765 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:59.765 rmmod nvme_tcp 00:39:59.765 rmmod nvme_fabrics 00:39:59.765 rmmod nvme_keyring 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3004707 ']' 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3004707 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 3004707 ']' 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 3004707 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3004707 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3004707' 00:40:00.025 killing process with pid 3004707 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 3004707 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 3004707 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:00.025 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:00.026 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:00.026 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:00.026 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:40:00.026 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:40:00.026 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:00.026 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:00.026 06:51:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.026 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:00.026 06:51:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.573 06:51:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:02.573 00:40:02.573 real 0m25.606s 00:40:02.573 user 0m40.010s 00:40:02.573 sys 0m10.170s 00:40:02.573 06:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:02.573 06:51:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:02.573 ************************************ 00:40:02.573 END TEST nvmf_interrupt 00:40:02.573 ************************************ 00:40:02.573 00:40:02.573 real 30m15.741s 00:40:02.573 user 62m29.628s 00:40:02.573 sys 11m4.739s 00:40:02.573 06:51:22 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:02.573 06:51:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:02.573 ************************************ 00:40:02.573 END TEST nvmf_tcp 00:40:02.573 ************************************ 00:40:02.573 06:51:22 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:40:02.573 06:51:22 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:02.573 06:51:22 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:40:02.573 06:51:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:02.573 06:51:22 -- common/autotest_common.sh@10 -- # set +x 00:40:02.573 ************************************ 00:40:02.573 START TEST spdkcli_nvmf_tcp 00:40:02.573 ************************************ 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:02.573 * Looking for test storage... 00:40:02.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:02.573 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:02.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.574 --rc genhtml_branch_coverage=1 00:40:02.574 --rc genhtml_function_coverage=1 00:40:02.574 --rc genhtml_legend=1 00:40:02.574 --rc geninfo_all_blocks=1 00:40:02.574 --rc geninfo_unexecuted_blocks=1 00:40:02.574 00:40:02.574 ' 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:02.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.574 --rc genhtml_branch_coverage=1 00:40:02.574 --rc genhtml_function_coverage=1 00:40:02.574 --rc genhtml_legend=1 00:40:02.574 --rc geninfo_all_blocks=1 00:40:02.574 --rc geninfo_unexecuted_blocks=1 00:40:02.574 00:40:02.574 ' 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:02.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.574 --rc genhtml_branch_coverage=1 00:40:02.574 --rc genhtml_function_coverage=1 00:40:02.574 --rc genhtml_legend=1 00:40:02.574 --rc geninfo_all_blocks=1 00:40:02.574 --rc geninfo_unexecuted_blocks=1 00:40:02.574 00:40:02.574 ' 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:02.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:02.574 --rc genhtml_branch_coverage=1 00:40:02.574 --rc genhtml_function_coverage=1 00:40:02.574 --rc genhtml_legend=1 00:40:02.574 --rc geninfo_all_blocks=1 00:40:02.574 --rc geninfo_unexecuted_blocks=1 00:40:02.574 00:40:02.574 ' 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:02.574 
06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:02.574 06:51:22 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:02.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3008588 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3008588 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 3008588 ']' 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:02.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:02.574 06:51:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:02.574 [2024-11-20 06:51:22.394922] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
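waitforlisten above gates on two things: the freshly launched nvmf_tgt (pid 3008588) must stay alive, and its JSON-RPC server must answer on /var/tmp/spdk.sock. A minimal sketch of that wait loop, assuming SPDK's scripts/rpc.py is reachable from the working directory (the real helper in autotest_common.sh adds more retries and diagnostics):

# Poll until the app's RPC socket answers, or the process dies.
wait_for_rpc() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1            # target exited early
        # rpc_get_methods succeeds once the app is up and listening
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}

The "Reactor started on core 0/1" notices that follow are the -m 0x3 core mask taking effect: two reactors, matching the two cores the spdkcli target was started with.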
00:40:02.574 [2024-11-20 06:51:22.395000] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3008588 ] 00:40:02.574 [2024-11-20 06:51:22.487649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:02.835 [2024-11-20 06:51:22.542364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:02.835 [2024-11-20 06:51:22.542371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:03.408 06:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:03.408 06:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:40:03.408 06:51:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:03.408 06:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:03.408 06:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:03.408 06:51:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:03.408 06:51:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:03.408 06:51:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:03.408 06:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:03.408 06:51:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:03.408 06:51:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:03.408 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:03.408 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:03.408 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:03.408 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:03.408 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:03.408 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:03.408 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:03.408 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:03.408 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:03.408 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:03.408 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:03.408 ' 00:40:06.022 [2024-11-20 06:51:25.937209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:07.407 [2024-11-20 06:51:27.297374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:09.953 [2024-11-20 06:51:29.832400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:12.499 [2024-11-20 06:51:32.034818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:13.882 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:13.882 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:13.882 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:13.882 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:13.882 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:13.882 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:13.882 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:13.882 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:13.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:13.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:13.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:13.882 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:13.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:13.882 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:13.882 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:13.882 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:13.883 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:13.883 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:14.143 06:51:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:14.143 06:51:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:14.143 06:51:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:14.143 06:51:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:14.143 06:51:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:14.143 06:51:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:14.143 06:51:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:14.143 06:51:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:14.405 06:51:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:14.405 06:51:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:14.405 06:51:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:14.405 06:51:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:14.405 06:51:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:14.666 
06:51:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:14.666 06:51:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:14.666 06:51:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:14.666 06:51:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:14.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:14.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:14.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:14.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:14.666 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:14.666 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:14.666 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:14.666 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:14.666 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:14.666 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:14.666 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:14.666 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:14.666 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:14.666 ' 00:40:21.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:21.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:21.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:21.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:21.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:21.256 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:21.256 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:21.256 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:21.256 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:21.256 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:21.256 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:21.256 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:21.256 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:21.256 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:21.256 06:51:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:21.256 06:51:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:21.256 06:51:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:21.256 
06:51:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3008588 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3008588 ']' 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3008588 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3008588 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3008588' 00:40:21.256 killing process with pid 3008588 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 3008588 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 3008588 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3008588 ']' 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3008588 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3008588 ']' 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3008588 00:40:21.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3008588) - No such process 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 3008588 is not found' 00:40:21.256 Process with pid 3008588 is not found 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:21.256 00:40:21.256 real 0m18.105s 00:40:21.256 user 0m40.217s 00:40:21.256 sys 0m0.841s 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:21.256 06:51:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:21.256 ************************************ 00:40:21.256 END TEST spdkcli_nvmf_tcp 00:40:21.256 ************************************ 00:40:21.256 06:51:40 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:21.256 06:51:40 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:40:21.256 06:51:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:21.256 06:51:40 -- common/autotest_common.sh@10 -- # set +x 00:40:21.256 ************************************ 00:40:21.256 START TEST nvmf_identify_passthru 00:40:21.256 ************************************ 00:40:21.256 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:21.256 * Looking for test 
storage... 00:40:21.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:21.256 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:21.256 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:40:21.256 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:21.256 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:21.256 06:51:40 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:21.257 06:51:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:21.257 06:51:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:21.257 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:21.257 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:21.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.257 --rc genhtml_branch_coverage=1 00:40:21.257 --rc genhtml_function_coverage=1 00:40:21.257 --rc genhtml_legend=1 00:40:21.257 --rc geninfo_all_blocks=1 00:40:21.257 --rc geninfo_unexecuted_blocks=1 00:40:21.257 00:40:21.257 ' 00:40:21.257 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:21.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.257 --rc genhtml_branch_coverage=1 00:40:21.257 --rc genhtml_function_coverage=1 00:40:21.257 --rc genhtml_legend=1 00:40:21.257 --rc geninfo_all_blocks=1 00:40:21.257 --rc geninfo_unexecuted_blocks=1 00:40:21.257 00:40:21.257 ' 00:40:21.257 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:21.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.257 --rc genhtml_branch_coverage=1 00:40:21.257 --rc genhtml_function_coverage=1 00:40:21.257 --rc genhtml_legend=1 00:40:21.257 --rc geninfo_all_blocks=1 00:40:21.257 --rc geninfo_unexecuted_blocks=1 00:40:21.257 00:40:21.257 ' 00:40:21.257 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:21.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.257 --rc genhtml_branch_coverage=1 00:40:21.257 --rc genhtml_function_coverage=1 00:40:21.257 --rc genhtml_legend=1 00:40:21.257 --rc geninfo_all_blocks=1 00:40:21.257 --rc geninfo_unexecuted_blocks=1 00:40:21.257 00:40:21.257 ' 00:40:21.257 06:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:21.257 06:51:40 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:21.257 06:51:40 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:21.257 06:51:40 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:21.257 06:51:40 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:21.257 06:51:40 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.257 06:51:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.257 06:51:40 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.257 06:51:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:21.257 06:51:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:21.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:21.257 06:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:21.257 06:51:40 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:21.257 06:51:40 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:21.257 06:51:40 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:21.257 06:51:40 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:21.257 06:51:40 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.257 06:51:40 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.257 06:51:40 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.257 06:51:40 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:21.257 06:51:40 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.257 06:51:40 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:21.257 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:21.257 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:21.258 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:21.258 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:21.258 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:21.258 06:51:40 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:21.258 06:51:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:29.398 06:51:47 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:29.398 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:29.399 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:29.399 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:29.399 Found net devices under 0000:31:00.0: cvl_0_0 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:29.399 Found net devices under 0000:31:00.1: cvl_0_1 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:29.399 06:51:47 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:29.399 06:51:47 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:29.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:29.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:40:29.399 00:40:29.399 --- 10.0.0.2 ping statistics --- 00:40:29.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:29.399 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:29.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:29.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:40:29.399 00:40:29.399 --- 10.0.0.1 ping statistics --- 00:40:29.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:29.399 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:29.399 06:51:48 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:29.399 06:51:48 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.399 06:51:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:40:29.399 06:51:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:40:29.399 06:51:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:40:29.399 06:51:48 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:40:29.399 06:51:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:29.399 06:51:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:29.399 06:51:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:29.399 06:51:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605500 00:40:29.399 06:51:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:29.399 06:51:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:29.399 06:51:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:29.399 06:51:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:40:29.399 06:51:49 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:29.399 06:51:49 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:29.399 06:51:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.660 06:51:49 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:29.660 06:51:49 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:29.660 06:51:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.660 06:51:49 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3015917 00:40:29.660 06:51:49 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:29.660 06:51:49 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:29.660 06:51:49 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3015917 00:40:29.660 06:51:49 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 3015917 ']' 00:40:29.660 06:51:49 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:29.660 06:51:49 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:29.660 06:51:49 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:29.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:29.660 06:51:49 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:29.660 06:51:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:29.660 [2024-11-20 06:51:49.414450] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:40:29.660 [2024-11-20 06:51:49.414522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:29.660 [2024-11-20 06:51:49.517716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:29.660 [2024-11-20 06:51:49.571800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:29.660 [2024-11-20 06:51:49.571878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:29.660 [2024-11-20 06:51:49.571887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:29.660 [2024-11-20 06:51:49.571895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:29.660 [2024-11-20 06:51:49.571906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:29.660 [2024-11-20 06:51:49.574032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:29.660 [2024-11-20 06:51:49.574191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:29.660 [2024-11-20 06:51:49.574352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:29.660 [2024-11-20 06:51:49.574353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:40:30.605 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:30.605 INFO: Log level set to 20 00:40:30.605 INFO: Requests: 00:40:30.605 { 00:40:30.605 "jsonrpc": "2.0", 00:40:30.605 "method": "nvmf_set_config", 00:40:30.605 "id": 1, 00:40:30.605 "params": { 00:40:30.605 "admin_cmd_passthru": { 00:40:30.605 "identify_ctrlr": true 00:40:30.605 } 00:40:30.605 } 00:40:30.605 } 00:40:30.605 00:40:30.605 INFO: response: 00:40:30.605 { 00:40:30.605 "jsonrpc": "2.0", 00:40:30.605 "id": 1, 00:40:30.605 "result": true 00:40:30.605 } 00:40:30.605 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.605 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:30.605 INFO: Setting log level to 20 00:40:30.605 INFO: Setting log level to 20 00:40:30.605 INFO: Log level set to 20 00:40:30.605 INFO: Log level set to 20 00:40:30.605 INFO: Requests: 00:40:30.605 { 00:40:30.605 "jsonrpc": "2.0", 00:40:30.605 "method": "framework_start_init", 00:40:30.605 "id": 1 00:40:30.605 } 00:40:30.605 00:40:30.605 INFO: Requests: 00:40:30.605 { 00:40:30.605 "jsonrpc": "2.0", 00:40:30.605 "method": "framework_start_init", 00:40:30.605 "id": 1 00:40:30.605 } 00:40:30.605 00:40:30.605 [2024-11-20 06:51:50.330164] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:30.605 INFO: response: 00:40:30.605 { 00:40:30.605 "jsonrpc": "2.0", 00:40:30.605 "id": 1, 00:40:30.605 "result": true 00:40:30.605 } 00:40:30.605 00:40:30.605 INFO: response: 00:40:30.605 { 00:40:30.605 "jsonrpc": "2.0", 00:40:30.605 "id": 1, 00:40:30.605 "result": true 00:40:30.605 } 00:40:30.605 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.605 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.605 06:51:50 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:30.605 INFO: Setting log level to 40 00:40:30.605 INFO: Setting log level to 40 00:40:30.605 INFO: Setting log level to 40 00:40:30.605 [2024-11-20 06:51:50.343743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.605 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:30.605 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.605 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:30.866 Nvme0n1 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.866 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.866 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.866 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:30.866 [2024-11-20 06:51:50.747757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.866 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:30.866 [ 00:40:30.866 { 00:40:30.866 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:30.866 "subtype": "Discovery", 00:40:30.866 "listen_addresses": [], 00:40:30.866 "allow_any_host": true, 00:40:30.866 "hosts": [] 00:40:30.866 }, 00:40:30.866 { 00:40:30.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:30.866 "subtype": "NVMe", 00:40:30.866 "listen_addresses": [ 00:40:30.866 { 00:40:30.866 "trtype": "TCP", 00:40:30.866 "adrfam": "IPv4", 00:40:30.866 "traddr": "10.0.0.2", 00:40:30.866 "trsvcid": "4420" 00:40:30.866 } 00:40:30.866 ], 00:40:30.866 "allow_any_host": true, 00:40:30.866 "hosts": [], 00:40:30.866 "serial_number": 
"SPDK00000000000001", 00:40:30.866 "model_number": "SPDK bdev Controller", 00:40:30.866 "max_namespaces": 1, 00:40:30.866 "min_cntlid": 1, 00:40:30.866 "max_cntlid": 65519, 00:40:30.866 "namespaces": [ 00:40:30.866 { 00:40:30.866 "nsid": 1, 00:40:30.866 "bdev_name": "Nvme0n1", 00:40:30.866 "name": "Nvme0n1", 00:40:30.866 "nguid": "36344730526055000025384500000031", 00:40:30.866 "uuid": "36344730-5260-5500-0025-384500000031" 00:40:30.866 } 00:40:30.866 ] 00:40:30.866 } 00:40:30.866 ] 00:40:30.866 06:51:50 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:30.866 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:30.866 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:30.866 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:31.128 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605500 00:40:31.128 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:31.128 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:31.128 06:51:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:31.388 06:51:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:40:31.388 06:51:51 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605500 '!=' S64GNE0R605500 ']' 00:40:31.388 06:51:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:40:31.388 06:51:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:31.388 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:31.388 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:31.388 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:31.388 06:51:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:31.388 06:51:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:31.388 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:31.388 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:31.388 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:31.388 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:31.388 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:31.388 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:31.388 rmmod nvme_tcp 00:40:31.388 rmmod nvme_fabrics 00:40:31.649 rmmod nvme_keyring 00:40:31.649 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:31.649 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:31.649 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:31.649 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3015917 ']' 00:40:31.649 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3015917 00:40:31.649 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 3015917 ']' 00:40:31.649 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 3015917 00:40:31.649 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:40:31.649 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:31.649 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3015917 00:40:31.649 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:31.649 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:31.649 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3015917' 00:40:31.649 killing process with pid 3015917 00:40:31.649 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 3015917 00:40:31.649 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 3015917 00:40:31.909 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:31.909 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:31.909 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:31.909 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:31.909 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:40:31.909 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:31.909 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:40:31.909 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:31.909 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:31.909 06:51:51 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:31.909 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:31.909 06:51:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.458 06:51:53 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:34.458 00:40:34.458 real 0m13.545s 00:40:34.458 user 0m10.695s 00:40:34.458 sys 0m7.025s 00:40:34.458 06:51:53 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:34.458 06:51:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:34.458 ************************************ 00:40:34.458 END TEST nvmf_identify_passthru 00:40:34.458 ************************************ 00:40:34.458 06:51:53 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:34.458 06:51:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:34.458 06:51:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:34.458 06:51:53 -- common/autotest_common.sh@10 -- # set +x 00:40:34.458 ************************************ 00:40:34.458 START TEST nvmf_dif 00:40:34.458 ************************************ 00:40:34.458 06:51:53 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:34.458 * Looking for test storage... 
00:40:34.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:34.458 06:51:53 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:34.458 06:51:53 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:40:34.458 06:51:53 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:34.458 06:51:54 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:34.458 06:51:54 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:34.459 06:51:54 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:34.459 06:51:54 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:34.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.459 --rc genhtml_branch_coverage=1 00:40:34.459 --rc genhtml_function_coverage=1 00:40:34.459 --rc genhtml_legend=1 00:40:34.459 --rc geninfo_all_blocks=1 00:40:34.459 --rc geninfo_unexecuted_blocks=1 00:40:34.459 00:40:34.459 ' 00:40:34.459 06:51:54 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:34.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.459 --rc genhtml_branch_coverage=1 00:40:34.459 --rc genhtml_function_coverage=1 00:40:34.459 --rc genhtml_legend=1 00:40:34.459 --rc geninfo_all_blocks=1 00:40:34.459 --rc geninfo_unexecuted_blocks=1 00:40:34.459 00:40:34.459 ' 00:40:34.459 06:51:54 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:40:34.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.459 --rc genhtml_branch_coverage=1 00:40:34.459 --rc genhtml_function_coverage=1 00:40:34.459 --rc genhtml_legend=1 00:40:34.459 --rc geninfo_all_blocks=1 00:40:34.459 --rc geninfo_unexecuted_blocks=1 00:40:34.459 00:40:34.459 ' 00:40:34.459 06:51:54 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:34.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.459 --rc genhtml_branch_coverage=1 00:40:34.459 --rc genhtml_function_coverage=1 00:40:34.459 --rc genhtml_legend=1 00:40:34.459 --rc geninfo_all_blocks=1 00:40:34.459 --rc geninfo_unexecuted_blocks=1 00:40:34.459 00:40:34.459 ' 00:40:34.459 06:51:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:34.459 06:51:54 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:34.459 06:51:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.459 06:51:54 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.459 06:51:54 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.459 06:51:54 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:34.459 06:51:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:34.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:34.459 06:51:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:34.459 06:51:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:34.459 06:51:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:34.459 06:51:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:34.459 06:51:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:34.459 06:51:54 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:34.460 06:51:54 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:34.460 06:51:54 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:34.460 06:51:54 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.460 06:51:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:34.460 06:51:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.460 06:51:54 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:34.460 06:51:54 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:34.460 06:51:54 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:40:34.460 06:51:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:42.601 06:52:01 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:42.601 06:52:01 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:42.601 06:52:01 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:42.601 06:52:01 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:42.601 06:52:01 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:42.601 06:52:01 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:42.601 06:52:01 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:42.602 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:42.602 
06:52:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:42.602 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:42.602 Found net devices under 0000:31:00.0: cvl_0_0 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:42.602 Found net devices under 0000:31:00.1: cvl_0_1 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:42.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:42.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:40:42.602 00:40:42.602 --- 10.0.0.2 ping statistics --- 00:40:42.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:42.602 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:42.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:42.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:40:42.602 00:40:42.602 --- 10.0.0.1 ping statistics --- 00:40:42.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:42.602 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:42.602 06:52:01 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:45.148 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:45.148 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:45.148 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:45.148 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:45.148 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:45.409 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:45.409 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:45.409 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:45.409 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:40:45.409 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:40:45.409 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:40:45.409 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:40:45.409 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:40:45.409 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:40:45.409 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:40:45.409 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:40:45.409 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:40:45.670 06:52:05 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:45.670 06:52:05 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:45.670 06:52:05 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:45.670 06:52:05 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:45.670 06:52:05 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:45.670 06:52:05 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:45.670 06:52:05 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:45.670 06:52:05 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:45.670 06:52:05 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:45.670 06:52:05 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:45.670 06:52:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:45.670 06:52:05 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3022057 00:40:45.670 06:52:05 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3022057 00:40:45.670 06:52:05 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 3022057 ']' 00:40:45.670 06:52:05 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:45.670 06:52:05 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:45.670 06:52:05 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:45.670 06:52:05 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:45.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:45.670 06:52:05 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:45.670 06:52:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:45.931 [2024-11-20 06:52:05.623226] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:40:45.931 [2024-11-20 06:52:05.623289] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:45.931 [2024-11-20 06:52:05.724785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:45.931 [2024-11-20 06:52:05.775614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:45.931 [2024-11-20 06:52:05.775662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:45.931 [2024-11-20 06:52:05.775671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:45.931 [2024-11-20 06:52:05.775678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:45.931 [2024-11-20 06:52:05.775685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:45.931 [2024-11-20 06:52:05.776498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:46.874 06:52:06 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:46.874 06:52:06 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:40:46.874 06:52:06 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:46.874 06:52:06 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:46.874 06:52:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:46.874 06:52:06 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:46.874 06:52:06 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:46.874 06:52:06 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:46.874 06:52:06 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.874 06:52:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:46.874 [2024-11-20 06:52:06.500232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:46.874 06:52:06 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.874 06:52:06 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:46.874 06:52:06 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:46.874 06:52:06 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:46.874 06:52:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:46.874 ************************************ 00:40:46.874 START TEST fio_dif_1_default 00:40:46.874 ************************************ 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:46.874 bdev_null0 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:46.874 [2024-11-20 06:52:06.560658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:46.874 { 00:40:46.874 "params": { 00:40:46.874 "name": "Nvme$subsystem", 00:40:46.874 "trtype": "$TEST_TRANSPORT", 00:40:46.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:46.874 "adrfam": "ipv4", 00:40:46.874 "trsvcid": "$NVMF_PORT", 00:40:46.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:46.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:46.874 "hdgst": ${hdgst:-false}, 00:40:46.874 
"ddgst": ${ddgst:-false} 00:40:46.874 }, 00:40:46.874 "method": "bdev_nvme_attach_controller" 00:40:46.874 } 00:40:46.874 EOF 00:40:46.874 )") 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:46.874 "params": { 00:40:46.874 "name": "Nvme0", 00:40:46.874 "trtype": "tcp", 00:40:46.874 "traddr": "10.0.0.2", 00:40:46.874 "adrfam": "ipv4", 00:40:46.874 "trsvcid": "4420", 00:40:46.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:46.874 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:46.874 "hdgst": false, 00:40:46.874 "ddgst": false 00:40:46.874 }, 00:40:46.874 "method": "bdev_nvme_attach_controller" 00:40:46.874 }' 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:46.874 06:52:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:47.136 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:47.136 fio-3.35 00:40:47.136 Starting 1 thread 00:40:59.368 00:40:59.368 filename0: (groupid=0, jobs=1): err= 0: pid=3022574: Wed Nov 20 06:52:17 2024 00:40:59.368 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:40:59.368 slat (nsec): min=5518, max=34889, avg=6296.25, stdev=1734.12 00:40:59.368 clat (usec): min=40871, max=44426, avg=40997.12, stdev=221.04 00:40:59.368 lat (usec): min=40879, max=44461, avg=41003.41, stdev=221.78 00:40:59.368 clat percentiles (usec): 00:40:59.368 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:59.368 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:59.368 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:59.368 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:40:59.368 | 99.99th=[44303] 00:40:59.368 bw ( KiB/s): min= 384, max= 416, per=99.46%, avg=388.80, stdev=11.72, samples=20 00:40:59.368 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:40:59.368 lat (msec) : 50=100.00% 00:40:59.368 cpu : usr=94.04%, sys=5.72%, ctx=13, majf=0, minf=236 00:40:59.368 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:59.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:59.368 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:59.368 latency : target=0, window=0, percentile=100.00%, depth=4 00:40:59.368 00:40:59.368 Run status group 0 (all jobs): 
00:40:59.368 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10008-10008msec 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.368 00:40:59.368 real 0m11.280s 00:40:59.368 user 0m18.220s 00:40:59.368 sys 0m0.979s 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:59.368 ************************************ 00:40:59.368 END TEST fio_dif_1_default 00:40:59.368 ************************************ 00:40:59.368 06:52:17 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:40:59.368 06:52:17 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:40:59.368 06:52:17 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:59.368 06:52:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:59.368 ************************************ 00:40:59.368 START TEST fio_dif_1_multi_subsystems 00:40:59.368 ************************************ 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:59.368 bdev_null0 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.368 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:59.369 [2024-11-20 06:52:17.886071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:59.369 bdev_null1 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:59.369 { 00:40:59.369 "params": { 00:40:59.369 "name": "Nvme$subsystem", 00:40:59.369 "trtype": "$TEST_TRANSPORT", 00:40:59.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:59.369 "adrfam": "ipv4", 00:40:59.369 "trsvcid": "$NVMF_PORT", 00:40:59.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:59.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:59.369 "hdgst": ${hdgst:-false}, 00:40:59.369 "ddgst": ${ddgst:-false} 00:40:59.369 }, 00:40:59.369 "method": "bdev_nvme_attach_controller" 00:40:59.369 } 00:40:59.369 EOF 00:40:59.369 )") 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:59.369 { 00:40:59.369 "params": { 00:40:59.369 "name": "Nvme$subsystem", 00:40:59.369 "trtype": "$TEST_TRANSPORT", 00:40:59.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:59.369 "adrfam": "ipv4", 00:40:59.369 "trsvcid": "$NVMF_PORT", 00:40:59.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:59.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:59.369 "hdgst": ${hdgst:-false}, 00:40:59.369 "ddgst": ${ddgst:-false} 00:40:59.369 }, 00:40:59.369 "method": "bdev_nvme_attach_controller" 00:40:59.369 } 00:40:59.369 EOF 00:40:59.369 )") 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:59.369 "params": { 00:40:59.369 "name": "Nvme0", 00:40:59.369 "trtype": "tcp", 00:40:59.369 "traddr": "10.0.0.2", 00:40:59.369 "adrfam": "ipv4", 00:40:59.369 "trsvcid": "4420", 00:40:59.369 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:59.369 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:59.369 "hdgst": false, 00:40:59.369 "ddgst": false 00:40:59.369 }, 00:40:59.369 "method": "bdev_nvme_attach_controller" 00:40:59.369 },{ 00:40:59.369 "params": { 00:40:59.369 "name": "Nvme1", 00:40:59.369 "trtype": "tcp", 00:40:59.369 "traddr": "10.0.0.2", 00:40:59.369 "adrfam": "ipv4", 00:40:59.369 "trsvcid": "4420", 00:40:59.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:59.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:59.369 "hdgst": false, 00:40:59.369 "ddgst": false 00:40:59.369 }, 00:40:59.369 "method": "bdev_nvme_attach_controller" 00:40:59.369 }' 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:40:59.369 06:52:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:59.369 06:52:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:40:59.369 06:52:18 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:40:59.369 06:52:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:59.369 06:52:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:59.369 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:59.369 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:59.369 fio-3.35 00:40:59.369 Starting 2 threads 00:41:09.369 00:41:09.369 filename0: (groupid=0, jobs=1): err= 0: pid=3024915: Wed Nov 20 06:52:29 2024 00:41:09.369 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec) 00:41:09.369 slat (nsec): min=5562, max=42919, avg=5862.06, stdev=1323.74 00:41:09.369 clat (usec): min=656, max=42068, avg=21083.52, stdev=20158.00 00:41:09.369 lat (usec): min=661, max=42105, avg=21089.38, stdev=20157.93 00:41:09.369 clat percentiles (usec): 00:41:09.369 | 1.00th=[ 717], 5.00th=[ 799], 10.00th=[ 816], 20.00th=[ 840], 00:41:09.369 | 30.00th=[ 857], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:41:09.369 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:09.369 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:41:09.369 | 99.99th=[42206] 00:41:09.369 bw ( KiB/s): min= 672, max= 768, per=66.11%, avg=759.58, stdev=25.78, samples=19 00:41:09.369 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:41:09.369 lat (usec) : 750=1.95%, 1000=46.10% 00:41:09.369 lat (msec) : 2=1.74%, 50=50.21% 00:41:09.369 cpu : usr=95.64%, sys=4.15%, ctx=9, majf=0, minf=106 00:41:09.369 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.369 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.369 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:09.369 filename1: (groupid=0, jobs=1): err= 0: pid=3024916: Wed Nov 20 06:52:29 2024 00:41:09.369 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10020msec) 00:41:09.369 slat (nsec): min=5508, max=34905, avg=6529.60, stdev=1799.80 00:41:09.369 clat (usec): min=832, max=42033, avg=40880.08, stdev=2575.69 00:41:09.369 lat (usec): min=838, max=42039, avg=40886.61, stdev=2575.80 00:41:09.369 clat percentiles (usec): 00:41:09.369 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:09.369 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:09.369 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:41:09.369 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:09.369 | 99.99th=[42206] 00:41:09.369 bw ( KiB/s): min= 384, max= 448, per=33.97%, avg=390.40, stdev=16.74, samples=20 00:41:09.369 iops : min= 96, max= 112, avg=97.60, stdev= 4.19, samples=20 00:41:09.369 lat (usec) : 1000=0.41% 00:41:09.369 lat (msec) : 50=99.59% 00:41:09.369 cpu : usr=95.89%, sys=3.90%, ctx=28, majf=0, minf=167 00:41:09.369 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:09.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.369 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.369 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:09.369 00:41:09.369 Run status group 0 (all jobs): 00:41:09.369 READ: bw=1148KiB/s (1176kB/s), 391KiB/s-758KiB/s (401kB/s-776kB/s), io=11.2MiB (11.8MB), run=10002-10020msec 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.631 00:41:09.631 real 0m11.493s 00:41:09.631 user 0m33.485s 00:41:09.631 sys 0m1.180s 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:09.631 06:52:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:09.631 ************************************ 00:41:09.631 END TEST fio_dif_1_multi_subsystems 00:41:09.631 ************************************ 00:41:09.631 06:52:29 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:09.631 06:52:29 
nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:09.631 06:52:29 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:09.631 06:52:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:09.631 ************************************ 00:41:09.631 START TEST fio_dif_rand_params 00:41:09.631 ************************************ 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:09.631 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.632 bdev_null0 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:09.632 [2024-11-20 06:52:29.431649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:09.632 { 00:41:09.632 "params": { 00:41:09.632 "name": "Nvme$subsystem", 00:41:09.632 "trtype": "$TEST_TRANSPORT", 00:41:09.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:09.632 "adrfam": "ipv4", 00:41:09.632 "trsvcid": "$NVMF_PORT", 00:41:09.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:09.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:09.632 "hdgst": ${hdgst:-false}, 00:41:09.632 "ddgst": ${ddgst:-false} 00:41:09.632 }, 00:41:09.632 "method": "bdev_nvme_attach_controller" 00:41:09.632 } 00:41:09.632 EOF 00:41:09.632 )") 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
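[editor note] Before every fio_bdev run in this log, the harness probes the SPDK fio plugin for linked sanitizer runtimes (the ldd | grep libasan / libclang_rt.asan | awk '{print $3}' steps above) so that any ASan library can be preloaded ahead of the plugin itself; fio then loads bdevs from the JSON config on /dev/fd/62 and reads the job file from /dev/fd/61. A condensed sketch of that launch sequence, assuming bash and the workspace path from the trace (the single grep -E folds the harness's per-sanitizer loop into one probe):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
# Pick up an ASan runtime only if the plugin was built against one
asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')
# Preload the sanitizer (possibly empty) before the SPDK ioengine, then run fio
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61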
00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:09.632 "params": { 00:41:09.632 "name": "Nvme0", 00:41:09.632 "trtype": "tcp", 00:41:09.632 "traddr": "10.0.0.2", 00:41:09.632 "adrfam": "ipv4", 00:41:09.632 "trsvcid": "4420", 00:41:09.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:09.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:09.632 "hdgst": false, 00:41:09.632 "ddgst": false 00:41:09.632 }, 00:41:09.632 "method": "bdev_nvme_attach_controller" 00:41:09.632 }' 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:09.632 06:52:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:10.237 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:10.237 ... 
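[editor note] The job description just printed (rw=randread, 128KiB blocks, ioengine=spdk_bdev, iodepth=3, cloned via numjobs) corresponds to the NULL_DIF=3 / bs=128k / numjobs=3 / iodepth=3 / runtime=5 parameters set by dif.sh above, with the DIF type applied at bdev_null_create time rather than in fio. A standalone job file of the same shape might look like this (a sketch only; the real harness generates its config on the fly and feeds it over /dev/fd/61, and the bdev name here is an assumption based on the Nvme0 controller attached via the JSON config):

cat > dif_rand.fio <<'EOF'
[filename0]
# SPDK's fio plugin runs jobs as threads
thread=1
ioengine=spdk_bdev
rw=randread
# bs/iodepth/numjobs/runtime mirror the dif.sh settings traced above
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1
# assumed bdev name exposed by the attached controller
filename=Nvme0n1
EOF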
00:41:10.237 fio-3.35 00:41:10.237 Starting 3 threads 00:41:15.529 00:41:15.529 filename0: (groupid=0, jobs=1): err= 0: pid=3027217: Wed Nov 20 06:52:35 2024 00:41:15.529 read: IOPS=315, BW=39.4MiB/s (41.3MB/s)(199MiB/5046msec) 00:41:15.529 slat (nsec): min=5586, max=33320, avg=8619.62, stdev=2117.44 00:41:15.529 clat (usec): min=4160, max=88224, avg=9475.19, stdev=6529.22 00:41:15.529 lat (usec): min=4168, max=88230, avg=9483.81, stdev=6529.27 00:41:15.529 clat percentiles (usec): 00:41:15.529 | 1.00th=[ 4686], 5.00th=[ 5276], 10.00th=[ 6063], 20.00th=[ 6783], 00:41:15.529 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8979], 00:41:15.529 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11338], 95.00th=[11994], 00:41:15.529 | 99.00th=[48497], 99.50th=[49546], 99.90th=[52167], 99.95th=[88605], 00:41:15.529 | 99.99th=[88605] 00:41:15.529 bw ( KiB/s): min=30720, max=47104, per=34.40%, avg=40678.40, stdev=5605.37, samples=10 00:41:15.529 iops : min= 240, max= 368, avg=317.80, stdev=43.79, samples=10 00:41:15.529 lat (msec) : 10=72.03%, 20=25.64%, 50=1.95%, 100=0.38% 00:41:15.529 cpu : usr=92.29%, sys=6.16%, ctx=414, majf=0, minf=113 00:41:15.529 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:15.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.529 issued rwts: total=1591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.529 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:15.529 filename0: (groupid=0, jobs=1): err= 0: pid=3027218: Wed Nov 20 06:52:35 2024 00:41:15.529 read: IOPS=265, BW=33.1MiB/s (34.7MB/s)(167MiB/5032msec) 00:41:15.529 slat (nsec): min=5585, max=32444, avg=8820.87, stdev=1611.35 00:41:15.529 clat (usec): min=3441, max=89727, avg=11303.59, stdev=13447.13 00:41:15.529 lat (usec): min=3449, max=89736, avg=11312.41, stdev=13447.00 00:41:15.529 clat percentiles (usec): 00:41:15.529 | 1.00th=[ 3916], 5.00th=[ 5014], 10.00th=[ 5407], 20.00th=[ 6063], 00:41:15.529 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7308], 00:41:15.529 | 70.00th=[ 7570], 80.00th=[ 7963], 90.00th=[45351], 95.00th=[47973], 00:41:15.529 | 99.00th=[49546], 99.50th=[50594], 99.90th=[89654], 99.95th=[89654], 00:41:15.529 | 99.99th=[89654] 00:41:15.529 bw ( KiB/s): min=23296, max=50944, per=28.81%, avg=34073.60, stdev=8452.27, samples=10 00:41:15.529 iops : min= 182, max= 398, avg=266.20, stdev=66.03, samples=10 00:41:15.529 lat (msec) : 4=1.20%, 10=88.23%, 50=9.97%, 100=0.60% 00:41:15.529 cpu : usr=96.10%, sys=3.66%, ctx=9, majf=0, minf=54 00:41:15.529 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:15.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.529 issued rwts: total=1334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.529 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:15.529 filename0: (groupid=0, jobs=1): err= 0: pid=3027219: Wed Nov 20 06:52:35 2024 00:41:15.529 read: IOPS=344, BW=43.0MiB/s (45.1MB/s)(217MiB/5044msec) 00:41:15.529 slat (nsec): min=5609, max=32367, avg=9043.46, stdev=2595.96 00:41:15.529 clat (usec): min=3446, max=87288, avg=8698.42, stdev=8274.78 00:41:15.529 lat (usec): min=3455, max=87294, avg=8707.46, stdev=8274.75 00:41:15.529 clat percentiles (usec): 00:41:15.529 | 1.00th=[ 4424], 5.00th=[ 5014], 10.00th=[ 5342], 20.00th=[ 
5800], 00:41:15.529 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 7111], 60.00th=[ 7570], 00:41:15.529 | 70.00th=[ 8094], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[ 9896], 00:41:15.529 | 99.00th=[47973], 99.50th=[48497], 99.90th=[86508], 99.95th=[87557], 00:41:15.529 | 99.99th=[87557] 00:41:15.529 bw ( KiB/s): min=34816, max=52992, per=37.54%, avg=44390.40, stdev=5256.42, samples=10 00:41:15.529 iops : min= 272, max= 414, avg=346.80, stdev=41.07, samples=10 00:41:15.529 lat (msec) : 4=0.29%, 10=95.11%, 20=0.58%, 50=3.86%, 100=0.17% 00:41:15.529 cpu : usr=90.82%, sys=7.00%, ctx=362, majf=0, minf=93 00:41:15.529 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:15.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.529 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.529 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:15.529 00:41:15.529 Run status group 0 (all jobs): 00:41:15.529 READ: bw=115MiB/s (121MB/s), 33.1MiB/s-43.0MiB/s (34.7MB/s-45.1MB/s), io=583MiB (611MB), run=5032-5046msec 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.791 bdev_null0 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.791 [2024-11-20 06:52:35.593940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.791 bdev_null1 00:41:15.791 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.792 bdev_null2 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:15.792 06:52:35 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:15.792 { 00:41:15.792 "params": { 00:41:15.792 "name": "Nvme$subsystem", 00:41:15.792 "trtype": "$TEST_TRANSPORT", 00:41:15.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:15.792 "adrfam": "ipv4", 00:41:15.792 "trsvcid": "$NVMF_PORT", 00:41:15.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:15.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:15.792 "hdgst": ${hdgst:-false}, 00:41:15.792 "ddgst": ${ddgst:-false} 00:41:15.792 }, 00:41:15.792 "method": "bdev_nvme_attach_controller" 00:41:15.792 } 00:41:15.792 EOF 00:41:15.792 )") 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:15.792 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:15.792 { 00:41:15.792 "params": { 00:41:15.792 "name": "Nvme$subsystem", 00:41:15.792 "trtype": "$TEST_TRANSPORT", 00:41:15.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:15.792 "adrfam": "ipv4", 00:41:15.792 "trsvcid": "$NVMF_PORT", 00:41:15.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:15.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:15.792 "hdgst": ${hdgst:-false}, 00:41:15.792 "ddgst": ${ddgst:-false} 00:41:15.792 }, 00:41:15.792 "method": "bdev_nvme_attach_controller" 00:41:15.792 } 00:41:15.792 EOF 00:41:15.792 )") 00:41:16.053 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:16.053 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:16.053 06:52:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:16.053 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:16.053 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:16.053 06:52:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:16.054 { 00:41:16.054 "params": { 00:41:16.054 "name": "Nvme$subsystem", 00:41:16.054 "trtype": "$TEST_TRANSPORT", 00:41:16.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:16.054 "adrfam": "ipv4", 00:41:16.054 "trsvcid": "$NVMF_PORT", 00:41:16.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:16.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:16.054 "hdgst": ${hdgst:-false}, 00:41:16.054 "ddgst": ${ddgst:-false} 00:41:16.054 }, 00:41:16.054 "method": "bdev_nvme_attach_controller" 00:41:16.054 } 00:41:16.054 EOF 00:41:16.054 )") 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:16.054 "params": { 00:41:16.054 "name": "Nvme0", 00:41:16.054 "trtype": "tcp", 00:41:16.054 "traddr": "10.0.0.2", 00:41:16.054 "adrfam": "ipv4", 00:41:16.054 "trsvcid": "4420", 00:41:16.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:16.054 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:16.054 "hdgst": false, 00:41:16.054 "ddgst": false 00:41:16.054 }, 00:41:16.054 "method": "bdev_nvme_attach_controller" 00:41:16.054 },{ 00:41:16.054 "params": { 00:41:16.054 "name": "Nvme1", 00:41:16.054 "trtype": "tcp", 00:41:16.054 "traddr": "10.0.0.2", 00:41:16.054 "adrfam": "ipv4", 00:41:16.054 "trsvcid": "4420", 00:41:16.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:16.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:16.054 "hdgst": false, 00:41:16.054 "ddgst": false 00:41:16.054 }, 00:41:16.054 "method": "bdev_nvme_attach_controller" 00:41:16.054 },{ 00:41:16.054 "params": { 00:41:16.054 "name": "Nvme2", 00:41:16.054 "trtype": "tcp", 00:41:16.054 "traddr": "10.0.0.2", 00:41:16.054 "adrfam": "ipv4", 00:41:16.054 "trsvcid": "4420", 00:41:16.054 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:16.054 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:16.054 "hdgst": false, 00:41:16.054 "ddgst": false 00:41:16.054 }, 00:41:16.054 "method": "bdev_nvme_attach_controller" 00:41:16.054 }' 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:16.054 
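The trace above stands up the data path for this run: three null bdevs are created with 64 MiB capacity, 512-byte blocks, 16 bytes of per-block metadata and DIF type 2; each bdev becomes the single namespace of its own NVMe/TCP subsystem listening on 10.0.0.2:4420; and the JSON printed just above is what the spdk_bdev fio engine consumes to attach one controller per subsystem. A rough standalone sketch of the same setup, assuming an nvmf_tgt process is already running, that ./scripts/rpc.py points into an SPDK checkout, and that the TCP transport still needs to be created (the transport call is a prerequisite handled earlier in the real run, not shown in this excerpt):

    rpc=./scripts/rpc.py                      # assumed path to SPDK's RPC client
    $rpc nvmf_create_transport -t tcp         # prerequisite; not part of this trace
    for i in 0 1 2; do
        # 64 MiB null bdev, 512 B blocks + 16 B metadata, DIF type 2
        $rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 2
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done

The LD_PRELOAD=' .../build/fio/spdk_bdev' plus --ioengine=spdk_bdev pair in the fio invocation that follows is how fio picks up SPDK's external I/O engine from the preloaded shared object instead of a built-in engine.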
06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:16.054 06:52:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:16.315 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:16.315 ... 00:41:16.315 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:16.315 ... 00:41:16.315 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:16.315 ... 00:41:16.315 fio-3.35 00:41:16.315 Starting 24 threads 00:41:28.555 00:41:28.555 filename0: (groupid=0, jobs=1): err= 0: pid=3028501: Wed Nov 20 06:52:47 2024 00:41:28.555 read: IOPS=672, BW=2689KiB/s (2754kB/s)(26.3MiB/10019msec) 00:41:28.555 slat (usec): min=5, max=107, avg=28.72, stdev=16.74 00:41:28.555 clat (usec): min=14205, max=29384, avg=23533.03, stdev=808.40 00:41:28.555 lat (usec): min=14248, max=29429, avg=23561.75, stdev=808.82 00:41:28.555 clat percentiles (usec): 00:41:28.555 | 1.00th=[21890], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:41:28.555 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:41:28.555 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:41:28.555 | 99.00th=[24773], 99.50th=[25035], 99.90th=[29230], 99.95th=[29230], 00:41:28.555 | 99.99th=[29492] 00:41:28.555 bw ( KiB/s): min= 2682, max= 2688, per=4.15%, avg=2687.37, stdev= 1.89, samples=19 00:41:28.555 iops : min= 670, max= 672, avg=671.79, stdev= 0.63, samples=19 00:41:28.555 lat (msec) : 20=0.58%, 50=99.42% 00:41:28.555 cpu : usr=98.38%, sys=1.17%, ctx=118, majf=0, minf=9 00:41:28.555 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:28.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.555 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.555 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.555 filename0: (groupid=0, jobs=1): err= 0: pid=3028502: Wed Nov 20 06:52:47 2024 00:41:28.555 read: IOPS=680, BW=2722KiB/s (2787kB/s)(26.6MiB/10002msec) 00:41:28.555 slat (nsec): min=5228, max=76775, avg=21452.20, stdev=11981.97 00:41:28.555 clat (usec): min=11071, max=45708, avg=23325.04, stdev=2190.52 00:41:28.555 lat (usec): min=11085, max=45729, avg=23346.49, stdev=2192.21 00:41:28.555 clat percentiles (usec): 00:41:28.555 | 1.00th=[14222], 5.00th=[21365], 10.00th=[23200], 20.00th=[23462], 00:41:28.555 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:41:28.555 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:41:28.555 | 99.00th=[24773], 99.50th=[27919], 99.90th=[45876], 99.95th=[45876], 00:41:28.555 | 99.99th=[45876] 00:41:28.555 bw ( KiB/s): min= 2560, max= 3200, per=4.20%, avg=2716.47, stdev=126.74, samples=19 00:41:28.555 iops : min= 640, max= 800, avg=679.00, stdev=31.72, samples=19 00:41:28.555 lat (msec) : 20=4.97%, 50=95.03% 00:41:28.555 cpu : usr=98.75%, sys=0.81%, ctx=128, majf=0, minf=9 00:41:28.555 IO depths : 1=4.8%, 2=10.5%, 4=23.1%, 8=53.6%, 16=7.9%, 32=0.0%, 
>=64=0.0% 00:41:28.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.555 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.555 issued rwts: total=6806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.555 filename0: (groupid=0, jobs=1): err= 0: pid=3028503: Wed Nov 20 06:52:47 2024 00:41:28.555 read: IOPS=664, BW=2658KiB/s (2722kB/s)(26.0MiB/10005msec) 00:41:28.555 slat (nsec): min=5695, max=94199, avg=13525.95, stdev=11125.75 00:41:28.555 clat (usec): min=8254, max=45122, avg=23994.79, stdev=3699.20 00:41:28.555 lat (usec): min=8260, max=45131, avg=24008.32, stdev=3699.09 00:41:28.555 clat percentiles (usec): 00:41:28.555 | 1.00th=[ 8717], 5.00th=[21627], 10.00th=[23200], 20.00th=[23462], 00:41:28.555 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:28.555 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[28967], 00:41:28.555 | 99.00th=[38536], 99.50th=[39060], 99.90th=[42730], 99.95th=[42730], 00:41:28.555 | 99.99th=[45351] 00:41:28.555 bw ( KiB/s): min= 2304, max= 2744, per=4.09%, avg=2648.00, stdev=107.26, samples=19 00:41:28.555 iops : min= 576, max= 686, avg=661.89, stdev=26.80, samples=19 00:41:28.555 lat (msec) : 10=1.55%, 20=2.14%, 50=96.31% 00:41:28.555 cpu : usr=98.48%, sys=1.06%, ctx=125, majf=0, minf=9 00:41:28.555 IO depths : 1=0.6%, 2=1.9%, 4=5.7%, 8=74.9%, 16=16.9%, 32=0.0%, >=64=0.0% 00:41:28.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.555 complete : 0=0.0%, 4=90.1%, 8=7.5%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.555 issued rwts: total=6648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.555 filename0: (groupid=0, jobs=1): err= 0: pid=3028504: Wed Nov 20 06:52:47 2024 00:41:28.555 read: IOPS=676, BW=2705KiB/s (2770kB/s)(26.4MiB/10007msec) 00:41:28.555 slat (usec): min=5, max=107, avg=26.07, stdev=17.41 00:41:28.555 clat (usec): min=4007, max=30265, avg=23448.56, stdev=1692.51 00:41:28.555 lat (usec): min=4025, max=30290, avg=23474.63, stdev=1691.12 00:41:28.555 clat percentiles (usec): 00:41:28.555 | 1.00th=[13304], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:41:28.555 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:41:28.555 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:41:28.555 | 99.00th=[24773], 99.50th=[24773], 99.90th=[30278], 99.95th=[30278], 00:41:28.555 | 99.99th=[30278] 00:41:28.555 bw ( KiB/s): min= 2682, max= 3072, per=4.18%, avg=2707.58, stdev=88.27, samples=19 00:41:28.555 iops : min= 670, max= 768, avg=676.84, stdev=22.08, samples=19 00:41:28.555 lat (msec) : 10=0.71%, 20=0.95%, 50=98.35% 00:41:28.555 cpu : usr=98.88%, sys=0.83%, ctx=51, majf=0, minf=9 00:41:28.555 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:28.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.555 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.555 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.555 filename0: (groupid=0, jobs=1): err= 0: pid=3028505: Wed Nov 20 06:52:47 2024 00:41:28.555 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10002msec) 00:41:28.555 slat (nsec): min=5706, max=69141, avg=14963.24, stdev=10526.74 00:41:28.555 clat (usec): 
min=9228, max=45514, avg=23747.85, stdev=1765.37 00:41:28.555 lat (usec): min=9235, max=45531, avg=23762.81, stdev=1765.48 00:41:28.555 clat percentiles (usec): 00:41:28.555 | 1.00th=[18482], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:28.555 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:28.555 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:28.555 | 99.00th=[28705], 99.50th=[36439], 99.90th=[45351], 99.95th=[45351], 00:41:28.555 | 99.99th=[45351] 00:41:28.555 bw ( KiB/s): min= 2560, max= 2792, per=4.14%, avg=2680.26, stdev=48.99, samples=19 00:41:28.555 iops : min= 640, max= 698, avg=669.95, stdev=12.24, samples=19 00:41:28.556 lat (msec) : 10=0.12%, 20=1.04%, 50=98.84% 00:41:28.556 cpu : usr=98.96%, sys=0.79%, ctx=13, majf=0, minf=9 00:41:28.556 IO depths : 1=5.8%, 2=11.9%, 4=24.6%, 8=51.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:41:28.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.556 filename0: (groupid=0, jobs=1): err= 0: pid=3028506: Wed Nov 20 06:52:47 2024 00:41:28.556 read: IOPS=675, BW=2703KiB/s (2768kB/s)(26.4MiB/10009msec) 00:41:28.556 slat (usec): min=5, max=108, avg= 9.44, stdev= 6.79 00:41:28.556 clat (usec): min=7090, max=31638, avg=23590.10, stdev=1731.84 00:41:28.556 lat (usec): min=7096, max=31645, avg=23599.54, stdev=1730.01 00:41:28.556 clat percentiles (usec): 00:41:28.556 | 1.00th=[12780], 5.00th=[22676], 10.00th=[23462], 20.00th=[23462], 00:41:28.556 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:28.556 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:28.556 | 99.00th=[27657], 99.50th=[28181], 99.90th=[30016], 99.95th=[31589], 00:41:28.556 | 99.99th=[31589] 00:41:28.556 bw ( KiB/s): min= 2688, max= 2949, per=4.18%, avg=2706.74, stdev=60.34, samples=19 00:41:28.556 iops : min= 672, max= 737, avg=676.63, stdev=15.02, samples=19 00:41:28.556 lat (msec) : 10=0.56%, 20=1.71%, 50=97.72% 00:41:28.556 cpu : usr=98.79%, sys=0.87%, ctx=47, majf=0, minf=9 00:41:28.556 IO depths : 1=4.3%, 2=10.2%, 4=23.6%, 8=53.7%, 16=8.2%, 32=0.0%, >=64=0.0% 00:41:28.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 issued rwts: total=6764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.556 filename0: (groupid=0, jobs=1): err= 0: pid=3028507: Wed Nov 20 06:52:47 2024 00:41:28.556 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.2MiB/10003msec) 00:41:28.556 slat (nsec): min=5718, max=70585, avg=14469.26, stdev=8912.50 00:41:28.556 clat (usec): min=10980, max=34931, avg=23685.04, stdev=1180.79 00:41:28.556 lat (usec): min=11003, max=34948, avg=23699.51, stdev=1180.22 00:41:28.556 clat percentiles (usec): 00:41:28.556 | 1.00th=[20055], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:28.556 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:28.556 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:41:28.556 | 99.00th=[26870], 99.50th=[29754], 99.90th=[34866], 99.95th=[34866], 00:41:28.556 | 99.99th=[34866] 00:41:28.556 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2680.00, 
stdev=29.17, samples=19 00:41:28.556 iops : min= 640, max= 672, avg=669.89, stdev= 7.29, samples=19 00:41:28.556 lat (msec) : 20=1.01%, 50=98.99% 00:41:28.556 cpu : usr=98.82%, sys=0.91%, ctx=25, majf=0, minf=9 00:41:28.556 IO depths : 1=5.4%, 2=11.7%, 4=24.9%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:41:28.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.556 filename0: (groupid=0, jobs=1): err= 0: pid=3028508: Wed Nov 20 06:52:47 2024 00:41:28.556 read: IOPS=676, BW=2707KiB/s (2772kB/s)(26.5MiB/10011msec) 00:41:28.556 slat (nsec): min=5700, max=95504, avg=10480.64, stdev=6804.40 00:41:28.556 clat (usec): min=3674, max=26623, avg=23551.89, stdev=1630.13 00:41:28.556 lat (usec): min=3702, max=26633, avg=23562.38, stdev=1627.44 00:41:28.556 clat percentiles (usec): 00:41:28.556 | 1.00th=[12780], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:28.556 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:28.556 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:41:28.556 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25035], 99.95th=[25035], 00:41:28.556 | 99.99th=[26608] 00:41:28.556 bw ( KiB/s): min= 2554, max= 3000, per=4.19%, avg=2710.53, stdev=97.10, samples=19 00:41:28.556 iops : min= 638, max= 750, avg=677.58, stdev=24.29, samples=19 00:41:28.556 lat (msec) : 4=0.10%, 10=0.71%, 20=0.71%, 50=98.48% 00:41:28.556 cpu : usr=98.53%, sys=1.08%, ctx=117, majf=0, minf=9 00:41:28.556 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:28.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 issued rwts: total=6775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.556 filename1: (groupid=0, jobs=1): err= 0: pid=3028510: Wed Nov 20 06:52:47 2024 00:41:28.556 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10003msec) 00:41:28.556 slat (usec): min=5, max=103, avg=31.14, stdev=16.03 00:41:28.556 clat (usec): min=9103, max=40028, avg=23527.30, stdev=1155.90 00:41:28.556 lat (usec): min=9109, max=40046, avg=23558.43, stdev=1156.37 00:41:28.556 clat percentiles (usec): 00:41:28.556 | 1.00th=[22152], 5.00th=[23200], 10.00th=[23200], 20.00th=[23200], 00:41:28.556 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23462], 00:41:28.556 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:41:28.556 | 99.00th=[24511], 99.50th=[24773], 99.90th=[40109], 99.95th=[40109], 00:41:28.556 | 99.99th=[40109] 00:41:28.556 bw ( KiB/s): min= 2565, max= 2688, per=4.14%, avg=2680.26, stdev=28.02, samples=19 00:41:28.556 iops : min= 641, max= 672, avg=669.95, stdev= 7.06, samples=19 00:41:28.556 lat (msec) : 10=0.21%, 20=0.45%, 50=99.35% 00:41:28.556 cpu : usr=98.48%, sys=1.07%, ctx=68, majf=0, minf=9 00:41:28.556 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:28.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.556 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:41:28.556 filename1: (groupid=0, jobs=1): err= 0: pid=3028511: Wed Nov 20 06:52:47 2024 00:41:28.556 read: IOPS=672, BW=2692KiB/s (2757kB/s)(26.3MiB/10009msec) 00:41:28.556 slat (usec): min=5, max=111, avg=24.29, stdev=17.65 00:41:28.556 clat (usec): min=13892, max=29286, avg=23580.87, stdev=791.39 00:41:28.556 lat (usec): min=13910, max=29293, avg=23605.16, stdev=790.02 00:41:28.556 clat percentiles (usec): 00:41:28.556 | 1.00th=[22152], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:41:28.556 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:41:28.556 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:41:28.556 | 99.00th=[24511], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:41:28.556 | 99.99th=[29230] 00:41:28.556 bw ( KiB/s): min= 2682, max= 2688, per=4.15%, avg=2687.37, stdev= 1.89, samples=19 00:41:28.556 iops : min= 670, max= 672, avg=671.79, stdev= 0.63, samples=19 00:41:28.556 lat (msec) : 20=0.74%, 50=99.26% 00:41:28.556 cpu : usr=98.75%, sys=0.89%, ctx=75, majf=0, minf=9 00:41:28.556 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:28.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.556 filename1: (groupid=0, jobs=1): err= 0: pid=3028512: Wed Nov 20 06:52:47 2024 00:41:28.556 read: IOPS=676, BW=2704KiB/s (2769kB/s)(26.4MiB/10011msec) 00:41:28.556 slat (usec): min=5, max=106, avg=10.41, stdev= 6.41 00:41:28.556 clat (usec): min=5065, max=24997, avg=23578.26, stdev=1540.91 00:41:28.556 lat (usec): min=5073, max=25003, avg=23588.66, stdev=1539.07 00:41:28.556 clat percentiles (usec): 00:41:28.556 | 1.00th=[17433], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:28.556 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:28.556 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:28.556 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:41:28.556 | 99.99th=[25035] 00:41:28.556 bw ( KiB/s): min= 2554, max= 2944, per=4.18%, avg=2707.58, stdev=88.27, samples=19 00:41:28.556 iops : min= 638, max= 736, avg=676.84, stdev=22.08, samples=19 00:41:28.556 lat (msec) : 10=0.68%, 20=0.74%, 50=98.58% 00:41:28.556 cpu : usr=98.83%, sys=0.75%, ctx=84, majf=0, minf=9 00:41:28.556 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:28.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.556 filename1: (groupid=0, jobs=1): err= 0: pid=3028513: Wed Nov 20 06:52:47 2024 00:41:28.556 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.2MiB/10003msec) 00:41:28.556 slat (nsec): min=4626, max=53459, avg=14101.03, stdev=8456.10 00:41:28.556 clat (usec): min=11535, max=36656, avg=23682.96, stdev=1047.32 00:41:28.556 lat (usec): min=11542, max=36671, avg=23697.06, stdev=1047.32 00:41:28.556 clat percentiles (usec): 00:41:28.556 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:28.556 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 
60.00th=[23725], 00:41:28.556 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:41:28.556 | 99.00th=[25035], 99.50th=[25297], 99.90th=[36439], 99.95th=[36439], 00:41:28.556 | 99.99th=[36439] 00:41:28.556 bw ( KiB/s): min= 2565, max= 2688, per=4.14%, avg=2680.26, stdev=28.02, samples=19 00:41:28.556 iops : min= 641, max= 672, avg=669.95, stdev= 7.06, samples=19 00:41:28.556 lat (msec) : 20=0.60%, 50=99.40% 00:41:28.556 cpu : usr=98.99%, sys=0.74%, ctx=33, majf=0, minf=9 00:41:28.556 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:28.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.556 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.556 filename1: (groupid=0, jobs=1): err= 0: pid=3028514: Wed Nov 20 06:52:47 2024 00:41:28.556 read: IOPS=689, BW=2757KiB/s (2823kB/s)(27.0MiB/10011msec) 00:41:28.556 slat (nsec): min=5690, max=99633, avg=22179.61, stdev=16404.90 00:41:28.557 clat (usec): min=4330, max=42252, avg=23030.39, stdev=3495.49 00:41:28.557 lat (usec): min=4353, max=42283, avg=23052.57, stdev=3497.54 00:41:28.557 clat percentiles (usec): 00:41:28.557 | 1.00th=[11994], 5.00th=[16319], 10.00th=[19006], 20.00th=[22938], 00:41:28.557 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:41:28.557 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[26608], 00:41:28.557 | 99.00th=[36963], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:41:28.557 | 99.99th=[42206] 00:41:28.557 bw ( KiB/s): min= 2682, max= 3296, per=4.26%, avg=2756.42, stdev=145.59, samples=19 00:41:28.557 iops : min= 670, max= 824, avg=689.05, stdev=36.43, samples=19 00:41:28.557 lat (msec) : 10=0.78%, 20=11.42%, 50=87.80% 00:41:28.557 cpu : usr=98.45%, sys=1.11%, ctx=87, majf=0, minf=11 00:41:28.557 IO depths : 1=4.3%, 2=8.6%, 4=18.5%, 8=60.0%, 16=8.7%, 32=0.0%, >=64=0.0% 00:41:28.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 complete : 0=0.0%, 4=92.4%, 8=2.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 issued rwts: total=6900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.557 filename1: (groupid=0, jobs=1): err= 0: pid=3028515: Wed Nov 20 06:52:47 2024 00:41:28.557 read: IOPS=675, BW=2702KiB/s (2767kB/s)(26.4MiB/10019msec) 00:41:28.557 slat (usec): min=5, max=110, avg=13.51, stdev=14.34 00:41:28.557 clat (usec): min=9154, max=27414, avg=23579.38, stdev=1347.26 00:41:28.557 lat (usec): min=9181, max=27421, avg=23592.90, stdev=1344.34 00:41:28.557 clat percentiles (usec): 00:41:28.557 | 1.00th=[15533], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:28.557 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:28.557 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:41:28.557 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:41:28.557 | 99.99th=[27395] 00:41:28.557 bw ( KiB/s): min= 2682, max= 2949, per=4.17%, avg=2701.00, stdev=58.40, samples=20 00:41:28.557 iops : min= 670, max= 737, avg=675.20, stdev=14.56, samples=20 00:41:28.557 lat (msec) : 10=0.46%, 20=0.96%, 50=98.58% 00:41:28.557 cpu : usr=98.53%, sys=1.12%, ctx=98, majf=0, minf=9 00:41:28.557 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:41:28.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.557 filename1: (groupid=0, jobs=1): err= 0: pid=3028516: Wed Nov 20 06:52:47 2024 00:41:28.557 read: IOPS=672, BW=2689KiB/s (2753kB/s)(26.3MiB/10009msec) 00:41:28.557 slat (nsec): min=4483, max=93408, avg=29009.93, stdev=16300.51 00:41:28.557 clat (usec): min=8728, max=40637, avg=23527.52, stdev=1414.44 00:41:28.557 lat (usec): min=8739, max=40649, avg=23556.53, stdev=1414.80 00:41:28.557 clat percentiles (usec): 00:41:28.557 | 1.00th=[18744], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:41:28.557 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:41:28.557 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:41:28.557 | 99.00th=[27395], 99.50th=[30540], 99.90th=[36963], 99.95th=[37487], 00:41:28.557 | 99.99th=[40633] 00:41:28.557 bw ( KiB/s): min= 2501, max= 2810, per=4.15%, avg=2683.63, stdev=67.01, samples=19 00:41:28.557 iops : min= 625, max= 702, avg=670.79, stdev=16.69, samples=19 00:41:28.557 lat (msec) : 10=0.06%, 20=1.25%, 50=98.69% 00:41:28.557 cpu : usr=98.54%, sys=1.04%, ctx=73, majf=0, minf=9 00:41:28.557 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:28.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 issued rwts: total=6728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.557 filename1: (groupid=0, jobs=1): err= 0: pid=3028517: Wed Nov 20 06:52:47 2024 00:41:28.557 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.2MiB/10002msec) 00:41:28.557 slat (nsec): min=5567, max=42749, avg=11124.40, stdev=6235.76 00:41:28.557 clat (usec): min=9897, max=34100, avg=23716.19, stdev=1101.73 00:41:28.557 lat (usec): min=9903, max=34123, avg=23727.31, stdev=1101.89 00:41:28.557 clat percentiles (usec): 00:41:28.557 | 1.00th=[20317], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:41:28.557 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:28.557 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:28.557 | 99.00th=[25297], 99.50th=[28705], 99.90th=[33817], 99.95th=[34341], 00:41:28.557 | 99.99th=[34341] 00:41:28.557 bw ( KiB/s): min= 2565, max= 2688, per=4.14%, avg=2680.26, stdev=28.02, samples=19 00:41:28.557 iops : min= 641, max= 672, avg=669.95, stdev= 7.06, samples=19 00:41:28.557 lat (msec) : 10=0.10%, 20=0.76%, 50=99.14% 00:41:28.557 cpu : usr=98.76%, sys=0.87%, ctx=107, majf=0, minf=9 00:41:28.557 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:41:28.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.557 filename2: (groupid=0, jobs=1): err= 0: pid=3028518: Wed Nov 20 06:52:47 2024 00:41:28.557 read: IOPS=673, BW=2693KiB/s (2758kB/s)(26.3MiB/10004msec) 00:41:28.557 slat (usec): min=5, max=106, avg=28.13, stdev=17.97 00:41:28.557 clat (usec): min=13895, 
max=25144, avg=23525.38, stdev=871.36 00:41:28.557 lat (usec): min=13926, max=25163, avg=23553.51, stdev=871.08 00:41:28.557 clat percentiles (usec): 00:41:28.557 | 1.00th=[21890], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:41:28.557 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:41:28.557 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:41:28.557 | 99.00th=[24511], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:41:28.557 | 99.99th=[25035] 00:41:28.557 bw ( KiB/s): min= 2682, max= 2688, per=4.15%, avg=2686.42, stdev= 2.71, samples=19 00:41:28.557 iops : min= 670, max= 672, avg=671.47, stdev= 0.90, samples=19 00:41:28.557 lat (msec) : 20=0.95%, 50=99.05% 00:41:28.557 cpu : usr=98.73%, sys=0.85%, ctx=177, majf=0, minf=9 00:41:28.557 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:28.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.557 filename2: (groupid=0, jobs=1): err= 0: pid=3028519: Wed Nov 20 06:52:47 2024 00:41:28.557 read: IOPS=679, BW=2716KiB/s (2781kB/s)(26.5MiB/10002msec) 00:41:28.557 slat (nsec): min=5693, max=96246, avg=26765.11, stdev=14900.43 00:41:28.557 clat (usec): min=5003, max=39980, avg=23321.17, stdev=2177.57 00:41:28.557 lat (usec): min=5009, max=39998, avg=23347.94, stdev=2179.80 00:41:28.557 clat percentiles (usec): 00:41:28.557 | 1.00th=[14091], 5.00th=[21627], 10.00th=[23200], 20.00th=[23200], 00:41:28.557 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23462], 00:41:28.557 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:41:28.557 | 99.00th=[27657], 99.50th=[35390], 99.90th=[40109], 99.95th=[40109], 00:41:28.557 | 99.99th=[40109] 00:41:28.557 bw ( KiB/s): min= 2565, max= 2992, per=4.19%, avg=2709.74, stdev=91.73, samples=19 00:41:28.557 iops : min= 641, max= 748, avg=677.32, stdev=22.99, samples=19 00:41:28.557 lat (msec) : 10=0.03%, 20=4.20%, 50=95.77% 00:41:28.557 cpu : usr=98.95%, sys=0.77%, ctx=53, majf=0, minf=9 00:41:28.557 IO depths : 1=5.0%, 2=10.7%, 4=23.1%, 8=53.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:41:28.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 issued rwts: total=6792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.557 filename2: (groupid=0, jobs=1): err= 0: pid=3028520: Wed Nov 20 06:52:47 2024 00:41:28.557 read: IOPS=679, BW=2718KiB/s (2783kB/s)(26.6MiB/10006msec) 00:41:28.557 slat (nsec): min=5692, max=96728, avg=12329.50, stdev=9393.42 00:41:28.557 clat (usec): min=7865, max=44581, avg=23455.15, stdev=2092.12 00:41:28.557 lat (usec): min=7871, max=44588, avg=23467.48, stdev=2091.55 00:41:28.557 clat percentiles (usec): 00:41:28.557 | 1.00th=[14091], 5.00th=[20055], 10.00th=[23200], 20.00th=[23462], 00:41:28.557 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:28.557 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:41:28.557 | 99.00th=[28181], 99.50th=[28705], 99.90th=[39584], 99.95th=[39584], 00:41:28.557 | 99.99th=[44827] 00:41:28.557 bw ( KiB/s): min= 2618, max= 2992, per=4.20%, avg=2719.26, stdev=84.19, 
samples=19 00:41:28.557 iops : min= 654, max= 748, avg=679.68, stdev=21.06, samples=19 00:41:28.557 lat (msec) : 10=0.38%, 20=4.09%, 50=95.53% 00:41:28.557 cpu : usr=98.73%, sys=0.98%, ctx=91, majf=0, minf=9 00:41:28.557 IO depths : 1=4.6%, 2=9.7%, 4=21.7%, 8=55.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:41:28.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.557 issued rwts: total=6798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.557 filename2: (groupid=0, jobs=1): err= 0: pid=3028521: Wed Nov 20 06:52:47 2024 00:41:28.557 read: IOPS=672, BW=2689KiB/s (2753kB/s)(26.3MiB/10006msec) 00:41:28.557 slat (usec): min=4, max=107, avg=32.02, stdev=19.93 00:41:28.557 clat (usec): min=8849, max=42462, avg=23494.33, stdev=1750.51 00:41:28.557 lat (usec): min=8856, max=42475, avg=23526.35, stdev=1751.63 00:41:28.557 clat percentiles (usec): 00:41:28.557 | 1.00th=[15008], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:41:28.557 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23462], 00:41:28.557 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:41:28.558 | 99.00th=[28705], 99.50th=[35390], 99.90th=[38536], 99.95th=[38536], 00:41:28.558 | 99.99th=[42206] 00:41:28.558 bw ( KiB/s): min= 2560, max= 2864, per=4.14%, avg=2682.53, stdev=59.31, samples=19 00:41:28.558 iops : min= 640, max= 716, avg=670.53, stdev=14.83, samples=19 00:41:28.558 lat (msec) : 10=0.09%, 20=1.78%, 50=98.13% 00:41:28.558 cpu : usr=98.62%, sys=0.92%, ctx=88, majf=0, minf=10 00:41:28.558 IO depths : 1=5.8%, 2=11.8%, 4=24.3%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:41:28.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.558 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.558 issued rwts: total=6726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.558 filename2: (groupid=0, jobs=1): err= 0: pid=3028522: Wed Nov 20 06:52:47 2024 00:41:28.558 read: IOPS=670, BW=2683KiB/s (2748kB/s)(26.2MiB/10005msec) 00:41:28.558 slat (usec): min=4, max=122, avg=29.92, stdev=21.85 00:41:28.558 clat (usec): min=11173, max=46135, avg=23549.85, stdev=1730.69 00:41:28.558 lat (usec): min=11180, max=46150, avg=23579.78, stdev=1731.34 00:41:28.558 clat percentiles (usec): 00:41:28.558 | 1.00th=[16581], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200], 00:41:28.558 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23462], 00:41:28.558 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24511], 00:41:28.558 | 99.00th=[29492], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:41:28.558 | 99.99th=[45876] 00:41:28.558 bw ( KiB/s): min= 2536, max= 2736, per=4.13%, avg=2676.63, stdev=47.32, samples=19 00:41:28.558 iops : min= 634, max= 684, avg=669.05, stdev=11.82, samples=19 00:41:28.558 lat (msec) : 20=1.85%, 50=98.15% 00:41:28.558 cpu : usr=99.14%, sys=0.59%, ctx=14, majf=0, minf=9 00:41:28.558 IO depths : 1=5.3%, 2=11.1%, 4=23.7%, 8=52.5%, 16=7.3%, 32=0.0%, >=64=0.0% 00:41:28.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.558 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.558 issued rwts: total=6712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.558 latency : target=0, window=0, percentile=100.00%, depth=16 
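Each filenameN block in this output is one fio job's view of one attached namespace; groupid=0 means all 24 jobs report into the single run-status group summarized at the end. For a quick cross-job summary from a saved copy of this log, a one-liner along these lines works (the log file name here is purely hypothetical):

    grep -o 'iops *: min=[^,]*, max=[^,]*, avg=[0-9.]*' nvmf-dif-fio.log |
        awk -F'avg=' '{ sum += $2; n++ } END { printf "jobs=%d mean_of_avg_iops=%.1f\n", n, sum/n }'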
00:41:28.558 filename2: (groupid=0, jobs=1): err= 0: pid=3028523: Wed Nov 20 06:52:47 2024 00:41:28.558 read: IOPS=676, BW=2707KiB/s (2772kB/s)(26.5MiB/10011msec) 00:41:28.558 slat (usec): min=5, max=102, avg=24.17, stdev=17.07 00:41:28.558 clat (usec): min=4410, max=42666, avg=23443.41, stdev=3371.11 00:41:28.558 lat (usec): min=4429, max=42687, avg=23467.58, stdev=3371.64 00:41:28.558 clat percentiles (usec): 00:41:28.558 | 1.00th=[10683], 5.00th=[17957], 10.00th=[21365], 20.00th=[23200], 00:41:28.558 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:41:28.558 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[28181], 00:41:28.558 | 99.00th=[38011], 99.50th=[40633], 99.90th=[42206], 99.95th=[42730], 00:41:28.558 | 99.99th=[42730] 00:41:28.558 bw ( KiB/s): min= 2560, max= 3072, per=4.19%, avg=2711.79, stdev=121.00, samples=19 00:41:28.558 iops : min= 640, max= 768, avg=677.89, stdev=30.28, samples=19 00:41:28.558 lat (msec) : 10=0.69%, 20=7.19%, 50=92.12% 00:41:28.558 cpu : usr=98.46%, sys=1.19%, ctx=81, majf=0, minf=9 00:41:28.558 IO depths : 1=4.3%, 2=8.5%, 4=19.0%, 8=59.7%, 16=8.5%, 32=0.0%, >=64=0.0% 00:41:28.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.558 complete : 0=0.0%, 4=92.5%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.558 issued rwts: total=6774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.558 filename2: (groupid=0, jobs=1): err= 0: pid=3028524: Wed Nov 20 06:52:47 2024 00:41:28.558 read: IOPS=682, BW=2731KiB/s (2796kB/s)(26.7MiB/10010msec) 00:41:28.558 slat (usec): min=4, max=100, avg=25.71, stdev=16.63 00:41:28.558 clat (usec): min=6985, max=41619, avg=23200.29, stdev=2449.14 00:41:28.558 lat (usec): min=6994, max=41632, avg=23226.00, stdev=2451.02 00:41:28.558 clat percentiles (usec): 00:41:28.558 | 1.00th=[14222], 5.00th=[18220], 10.00th=[21890], 20.00th=[23200], 00:41:28.558 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23462], 00:41:28.558 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24511], 00:41:28.558 | 99.00th=[31327], 99.50th=[33817], 99.90th=[36963], 99.95th=[39060], 00:41:28.558 | 99.99th=[41681] 00:41:28.558 bw ( KiB/s): min= 2560, max= 2896, per=4.22%, avg=2728.00, stdev=85.34, samples=19 00:41:28.558 iops : min= 640, max= 724, avg=681.89, stdev=21.31, samples=19 00:41:28.558 lat (msec) : 10=0.06%, 20=6.97%, 50=92.98% 00:41:28.558 cpu : usr=98.90%, sys=0.82%, ctx=34, majf=0, minf=11 00:41:28.558 IO depths : 1=4.7%, 2=9.5%, 4=20.1%, 8=57.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:41:28.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.558 complete : 0=0.0%, 4=92.9%, 8=2.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.558 issued rwts: total=6834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.558 filename2: (groupid=0, jobs=1): err= 0: pid=3028525: Wed Nov 20 06:52:47 2024 00:41:28.558 read: IOPS=675, BW=2702KiB/s (2767kB/s)(26.4MiB/10020msec) 00:41:28.558 slat (nsec): min=5715, max=67423, avg=12105.58, stdev=7730.40 00:41:28.558 clat (usec): min=9316, max=30144, avg=23583.92, stdev=1449.66 00:41:28.558 lat (usec): min=9368, max=30152, avg=23596.03, stdev=1448.18 00:41:28.558 clat percentiles (usec): 00:41:28.558 | 1.00th=[16581], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:41:28.558 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:41:28.558 | 
70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:41:28.558 | 99.00th=[25035], 99.50th=[25297], 99.90th=[29754], 99.95th=[29754], 00:41:28.558 | 99.99th=[30016] 00:41:28.558 bw ( KiB/s): min= 2682, max= 2944, per=4.17%, avg=2700.75, stdev=57.28, samples=20 00:41:28.558 iops : min= 670, max= 736, avg=675.15, stdev=14.33, samples=20 00:41:28.558 lat (msec) : 10=0.47%, 20=1.03%, 50=98.49% 00:41:28.558 cpu : usr=97.40%, sys=1.69%, ctx=660, majf=0, minf=9 00:41:28.558 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:41:28.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.558 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:28.558 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:28.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:28.558 00:41:28.558 Run status group 0 (all jobs): 00:41:28.558 READ: bw=63.2MiB/s (66.3MB/s), 2658KiB/s-2757KiB/s (2722kB/s-2823kB/s), io=633MiB (664MB), run=10002-10020msec 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@45 -- # for sub in "$@" 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:28.558 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.559 bdev_null0 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.559 [2024-11-20 06:52:47.378838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.559 bdev_null1 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:28.559 06:52:47 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:28.559 { 00:41:28.559 "params": { 00:41:28.559 "name": "Nvme$subsystem", 00:41:28.559 "trtype": "$TEST_TRANSPORT", 00:41:28.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:28.559 "adrfam": "ipv4", 00:41:28.559 "trsvcid": "$NVMF_PORT", 00:41:28.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:28.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:28.559 "hdgst": ${hdgst:-false}, 00:41:28.559 "ddgst": ${ddgst:-false} 00:41:28.559 }, 00:41:28.559 "method": "bdev_nvme_attach_controller" 00:41:28.559 } 00:41:28.559 EOF 00:41:28.559 )") 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:28.559 { 00:41:28.559 "params": { 00:41:28.559 "name": "Nvme$subsystem", 00:41:28.559 "trtype": "$TEST_TRANSPORT", 00:41:28.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:28.559 "adrfam": "ipv4", 00:41:28.559 "trsvcid": "$NVMF_PORT", 00:41:28.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:28.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:28.559 "hdgst": ${hdgst:-false}, 00:41:28.559 "ddgst": ${ddgst:-false} 00:41:28.559 }, 00:41:28.559 "method": "bdev_nvme_attach_controller" 00:41:28.559 } 00:41:28.559 EOF 00:41:28.559 )") 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:28.559 06:52:47 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:28.559 "params": { 00:41:28.559 "name": "Nvme0", 00:41:28.559 "trtype": "tcp", 00:41:28.559 "traddr": "10.0.0.2", 00:41:28.559 "adrfam": "ipv4", 00:41:28.559 "trsvcid": "4420", 00:41:28.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:28.559 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:28.559 "hdgst": false, 00:41:28.559 "ddgst": false 00:41:28.559 }, 00:41:28.559 "method": "bdev_nvme_attach_controller" 00:41:28.559 },{ 00:41:28.559 "params": { 00:41:28.559 "name": "Nvme1", 00:41:28.559 "trtype": "tcp", 00:41:28.559 "traddr": "10.0.0.2", 00:41:28.559 "adrfam": "ipv4", 00:41:28.559 "trsvcid": "4420", 00:41:28.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:28.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:28.559 "hdgst": false, 00:41:28.559 "ddgst": false 00:41:28.559 }, 00:41:28.559 "method": "bdev_nvme_attach_controller" 00:41:28.559 }' 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:28.559 06:52:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:28.559 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:28.559 ... 00:41:28.559 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:28.559 ... 
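What the trace above boils down to: the harness preloads the SPDK fio bdev plugin and hands fio the generated bdev_nvme_attach_controller config on an inherited descriptor (/dev/fd/62), with the job file on /dev/fd/61. A minimal standalone sketch of the same invocation, assuming the plugin was built via ./configure --with-fio=/usr/src/fio and that the config and job file live in ordinary files (both paths illustrative, not taken from this run):

# /tmp/nvme.json and /tmp/dif.fio stand in for the /dev/fd/62 and /dev/fd/61
# descriptors the harness wires up above.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme.json /tmp/dif.fio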
00:41:28.559 fio-3.35 00:41:28.559 Starting 4 threads 00:41:33.848 00:41:33.848 filename0: (groupid=0, jobs=1): err= 0: pid=3030874: Wed Nov 20 06:52:53 2024 00:41:33.848 read: IOPS=2982, BW=23.3MiB/s (24.4MB/s)(117MiB/5002msec) 00:41:33.848 slat (nsec): min=5507, max=70780, avg=8360.12, stdev=3480.28 00:41:33.848 clat (usec): min=1062, max=4865, avg=2658.59, stdev=253.98 00:41:33.848 lat (usec): min=1070, max=4873, avg=2666.95, stdev=254.19 00:41:33.848 clat percentiles (usec): 00:41:33.848 | 1.00th=[ 1942], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2573], 00:41:33.848 | 30.00th=[ 2638], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:41:33.848 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2868], 95.00th=[ 2966], 00:41:33.848 | 99.00th=[ 3687], 99.50th=[ 3916], 99.90th=[ 4359], 99.95th=[ 4621], 00:41:33.848 | 99.99th=[ 4883] 00:41:33.848 bw ( KiB/s): min=23438, max=24224, per=25.19%, avg=23861.11, stdev=269.67, samples=9 00:41:33.848 iops : min= 2929, max= 3028, avg=2982.56, stdev=33.86, samples=9 00:41:33.848 lat (msec) : 2=1.35%, 4=98.27%, 10=0.38% 00:41:33.848 cpu : usr=96.96%, sys=2.76%, ctx=8, majf=0, minf=57 00:41:33.848 IO depths : 1=0.1%, 2=0.4%, 4=73.0%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:33.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.848 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.848 issued rwts: total=14918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.848 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:33.848 filename0: (groupid=0, jobs=1): err= 0: pid=3030875: Wed Nov 20 06:52:53 2024 00:41:33.849 read: IOPS=2964, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:41:33.849 slat (nsec): min=5507, max=96252, avg=7393.52, stdev=3460.88 00:41:33.849 clat (usec): min=962, max=5264, avg=2678.32, stdev=270.24 00:41:33.849 lat (usec): min=970, max=5289, avg=2685.71, stdev=270.55 00:41:33.849 clat percentiles (usec): 00:41:33.849 | 1.00th=[ 1942], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2573], 00:41:33.849 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:41:33.849 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2999], 00:41:33.849 | 99.00th=[ 3818], 99.50th=[ 4146], 99.90th=[ 4686], 99.95th=[ 5014], 00:41:33.849 | 99.99th=[ 5211] 00:41:33.849 bw ( KiB/s): min=23248, max=24016, per=25.00%, avg=23676.44, stdev=312.39, samples=9 00:41:33.849 iops : min= 2906, max= 3002, avg=2959.56, stdev=39.05, samples=9 00:41:33.849 lat (usec) : 1000=0.03% 00:41:33.849 lat (msec) : 2=1.23%, 4=98.10%, 10=0.65% 00:41:33.849 cpu : usr=96.86%, sys=2.84%, ctx=17, majf=0, minf=108 00:41:33.849 IO depths : 1=0.1%, 2=0.2%, 4=71.7%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:33.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.849 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.849 issued rwts: total=14827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.849 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:33.849 filename1: (groupid=0, jobs=1): err= 0: pid=3030876: Wed Nov 20 06:52:53 2024 00:41:33.849 read: IOPS=2971, BW=23.2MiB/s (24.3MB/s)(116MiB/5002msec) 00:41:33.849 slat (nsec): min=5527, max=56991, avg=8245.08, stdev=3537.49 00:41:33.849 clat (usec): min=982, max=4849, avg=2669.74, stdev=243.58 00:41:33.849 lat (usec): min=988, max=4858, avg=2677.99, stdev=243.72 00:41:33.849 clat percentiles (usec): 00:41:33.849 | 1.00th=[ 2008], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2573], 
00:41:33.849 | 30.00th=[ 2638], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:41:33.849 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2900], 95.00th=[ 2999], 00:41:33.849 | 99.00th=[ 3621], 99.50th=[ 3916], 99.90th=[ 4424], 99.95th=[ 4621], 00:41:33.849 | 99.99th=[ 4817] 00:41:33.849 bw ( KiB/s): min=23456, max=24016, per=25.08%, avg=23756.44, stdev=212.23, samples=9 00:41:33.849 iops : min= 2932, max= 3002, avg=2969.56, stdev=26.53, samples=9 00:41:33.849 lat (usec) : 1000=0.02% 00:41:33.849 lat (msec) : 2=0.98%, 4=98.67%, 10=0.33% 00:41:33.849 cpu : usr=96.54%, sys=3.20%, ctx=10, majf=0, minf=114 00:41:33.849 IO depths : 1=0.1%, 2=0.3%, 4=72.4%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:33.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.849 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.849 issued rwts: total=14863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.849 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:33.849 filename1: (groupid=0, jobs=1): err= 0: pid=3030877: Wed Nov 20 06:52:53 2024 00:41:33.849 read: IOPS=2922, BW=22.8MiB/s (23.9MB/s)(114MiB/5001msec) 00:41:33.849 slat (nsec): min=8037, max=92178, avg=9536.05, stdev=3617.13 00:41:33.849 clat (usec): min=1141, max=5818, avg=2712.81, stdev=324.11 00:41:33.849 lat (usec): min=1152, max=5842, avg=2722.35, stdev=323.99 00:41:33.849 clat percentiles (usec): 00:41:33.849 | 1.00th=[ 1975], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2606], 00:41:33.849 | 30.00th=[ 2638], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:41:33.849 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 2999], 95.00th=[ 3326], 00:41:33.849 | 99.00th=[ 3982], 99.50th=[ 4047], 99.90th=[ 4883], 99.95th=[ 5538], 00:41:33.849 | 99.99th=[ 5735] 00:41:33.849 bw ( KiB/s): min=22704, max=23808, per=24.71%, avg=23402.67, stdev=353.99, samples=9 00:41:33.849 iops : min= 2838, max= 2976, avg=2925.33, stdev=44.25, samples=9 00:41:33.849 lat (msec) : 2=1.16%, 4=98.13%, 10=0.70% 00:41:33.849 cpu : usr=97.02%, sys=2.70%, ctx=15, majf=0, minf=89 00:41:33.849 IO depths : 1=0.1%, 2=0.8%, 4=69.9%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:33.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.849 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.849 issued rwts: total=14614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.849 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:33.849 00:41:33.849 Run status group 0 (all jobs): 00:41:33.849 READ: bw=92.5MiB/s (97.0MB/s), 22.8MiB/s-23.3MiB/s (23.9MB/s-24.4MB/s), io=463MiB (485MB), run=5001-5002msec 00:41:34.111 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:34.111 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:34.111 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:34.111 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:34.111 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:34.111 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:34.111 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.112 00:41:34.112 real 0m24.433s 00:41:34.112 user 5m14.164s 00:41:34.112 sys 0m4.834s 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:34.112 06:52:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:34.112 ************************************ 00:41:34.112 END TEST fio_dif_rand_params 00:41:34.112 ************************************ 00:41:34.112 06:52:53 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:34.112 06:52:53 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:34.112 06:52:53 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:34.112 06:52:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:34.112 ************************************ 00:41:34.112 START TEST fio_dif_digest 00:41:34.112 ************************************ 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:34.112 
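Stripped of xtrace noise, the create_subsystems step traced below reduces to four RPCs against the running target. An illustrative replay by hand, assuming scripts/rpc.py in the SPDK tree and the default /var/tmp/spdk.sock socket:

# Null bdev: 64 MiB, 512-byte blocks, 16 bytes of metadata, DIF type 3.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# Subsystem, namespace, and an NVMe/TCP listener on 10.0.0.2:4420.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420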
06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:34.112 bdev_null0 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:34.112 [2024-11-20 06:52:53.913033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:34.112 { 00:41:34.112 "params": { 00:41:34.112 "name": "Nvme$subsystem", 00:41:34.112 "trtype": "$TEST_TRANSPORT", 00:41:34.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:34.112 "adrfam": "ipv4", 00:41:34.112 "trsvcid": "$NVMF_PORT", 00:41:34.112 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:41:34.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:34.112 "hdgst": ${hdgst:-false}, 00:41:34.112 "ddgst": ${ddgst:-false} 00:41:34.112 }, 00:41:34.112 "method": "bdev_nvme_attach_controller" 00:41:34.112 } 00:41:34.112 EOF 00:41:34.112 )") 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:34.112 06:52:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:34.112 "params": { 00:41:34.112 "name": "Nvme0", 00:41:34.112 "trtype": "tcp", 00:41:34.112 "traddr": "10.0.0.2", 00:41:34.113 "adrfam": "ipv4", 00:41:34.113 "trsvcid": "4420", 00:41:34.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:34.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:34.113 "hdgst": true, 00:41:34.113 "ddgst": true 00:41:34.113 }, 00:41:34.113 "method": "bdev_nvme_attach_controller" 00:41:34.113 }' 00:41:34.113 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:34.113 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:34.113 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:41:34.113 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:34.113 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:41:34.113 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:41:34.113 06:52:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:41:34.113 06:52:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:41:34.113 06:52:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:34.113 06:52:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:34.703 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:34.703 ... 
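The job banner above pins down the workload: rw=randread, bs=128k, iodepth=3, fanned out over the three threads and ten-second runtime set earlier, while the hdgst/ddgst flags in the attach config enable NVMe/TCP header and data digests on the initiator side. A reconstruction of a job file consistent with that banner (the filename value is an illustrative guess at the bdev name gen_fio_conf resolves):

cat > /tmp/dif_digest.fio <<'FIO'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10

[filename0]
filename=Nvme0n1
FIO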
00:41:34.703 fio-3.35 00:41:34.703 Starting 3 threads 00:41:47.026 00:41:47.026 filename0: (groupid=0, jobs=1): err= 0: pid=3032077: Wed Nov 20 06:53:04 2024 00:41:47.026 read: IOPS=341, BW=42.7MiB/s (44.7MB/s)(429MiB/10045msec) 00:41:47.026 slat (nsec): min=5885, max=32506, avg=6506.21, stdev=987.95 00:41:47.026 clat (usec): min=5867, max=50457, avg=8770.84, stdev=1948.51 00:41:47.026 lat (usec): min=5873, max=50464, avg=8777.35, stdev=1948.54 00:41:47.026 clat percentiles (usec): 00:41:47.026 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7504], 00:41:47.026 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9241], 00:41:47.026 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10552], 00:41:47.026 | 99.00th=[11207], 99.50th=[11469], 99.90th=[49021], 99.95th=[50594], 00:41:47.026 | 99.99th=[50594] 00:41:47.026 bw ( KiB/s): min=39168, max=47104, per=40.01%, avg=43852.80, stdev=1684.04, samples=20 00:41:47.026 iops : min= 306, max= 368, avg=342.60, stdev=13.16, samples=20 00:41:47.026 lat (msec) : 10=84.54%, 20=15.32%, 50=0.06%, 100=0.09% 00:41:47.026 cpu : usr=94.41%, sys=5.37%, ctx=9, majf=0, minf=71 00:41:47.026 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:47.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.026 issued rwts: total=3428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.026 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:47.026 filename0: (groupid=0, jobs=1): err= 0: pid=3032078: Wed Nov 20 06:53:04 2024 00:41:47.026 read: IOPS=333, BW=41.7MiB/s (43.8MB/s)(419MiB/10045msec) 00:41:47.026 slat (nsec): min=5862, max=32665, avg=6535.95, stdev=847.14 00:41:47.026 clat (usec): min=4848, max=48573, avg=8966.53, stdev=1605.21 00:41:47.026 lat (usec): min=4854, max=48580, avg=8973.06, stdev=1605.24 00:41:47.026 clat percentiles (usec): 00:41:47.026 | 1.00th=[ 6587], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7635], 00:41:47.026 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9503], 00:41:47.026 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[11076], 00:41:47.026 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12518], 99.95th=[45876], 00:41:47.026 | 99.99th=[48497] 00:41:47.026 bw ( KiB/s): min=39936, max=45568, per=39.13%, avg=42892.80, stdev=1537.07, samples=20 00:41:47.026 iops : min= 312, max= 356, avg=335.10, stdev=12.01, samples=20 00:41:47.026 lat (msec) : 10=75.87%, 20=24.07%, 50=0.06% 00:41:47.026 cpu : usr=93.97%, sys=5.80%, ctx=24, majf=0, minf=181 00:41:47.026 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:47.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.026 issued rwts: total=3353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.026 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:47.026 filename0: (groupid=0, jobs=1): err= 0: pid=3032079: Wed Nov 20 06:53:04 2024 00:41:47.026 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(228MiB/10005msec) 00:41:47.026 slat (nsec): min=5926, max=31143, avg=6658.60, stdev=1083.77 00:41:47.026 clat (msec): min=7, max=131, avg=16.47, stdev=16.25 00:41:47.026 lat (msec): min=7, max=131, avg=16.48, stdev=16.25 00:41:47.026 clat percentiles (msec): 00:41:47.026 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:41:47.026 | 30.00th=[ 10], 
40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:41:47.026 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 51], 95.00th=[ 52], 00:41:47.026 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 93], 99.95th=[ 132], 00:41:47.026 | 99.99th=[ 132] 00:41:47.026 bw ( KiB/s): min=13824, max=33280, per=21.23%, avg=23270.40, stdev=4674.81, samples=20 00:41:47.026 iops : min= 108, max= 260, avg=181.80, stdev=36.52, samples=20 00:41:47.026 lat (msec) : 10=36.02%, 20=49.86%, 50=2.75%, 100=11.31%, 250=0.05% 00:41:47.026 cpu : usr=95.50%, sys=4.29%, ctx=13, majf=0, minf=100 00:41:47.026 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:47.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:47.026 issued rwts: total=1821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:47.026 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:47.026 00:41:47.026 Run status group 0 (all jobs): 00:41:47.026 READ: bw=107MiB/s (112MB/s), 22.8MiB/s-42.7MiB/s (23.9MB/s-44.7MB/s), io=1075MiB (1127MB), run=10005-10045msec 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:47.026 06:53:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:47.026 06:53:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:47.026 00:41:47.026 real 0m11.144s 00:41:47.026 user 0m42.331s 00:41:47.027 sys 0m1.877s 00:41:47.027 06:53:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:47.027 06:53:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:47.027 ************************************ 00:41:47.027 END TEST fio_dif_digest 00:41:47.027 ************************************ 00:41:47.027 06:53:05 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:47.027 06:53:05 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:47.027 rmmod nvme_tcp 00:41:47.027 rmmod nvme_fabrics 00:41:47.027 rmmod nvme_keyring 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3022057 ']' 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3022057 00:41:47.027 06:53:05 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 3022057 ']' 00:41:47.027 06:53:05 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 3022057 00:41:47.027 06:53:05 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:41:47.027 06:53:05 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:47.027 06:53:05 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3022057 00:41:47.027 06:53:05 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:47.027 06:53:05 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:47.027 06:53:05 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3022057' 00:41:47.027 killing process with pid 3022057 00:41:47.027 06:53:05 nvmf_dif -- common/autotest_common.sh@971 -- # kill 3022057 00:41:47.027 06:53:05 nvmf_dif -- common/autotest_common.sh@976 -- # wait 3022057 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:47.027 06:53:05 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:48.940 Waiting for block devices as requested 00:41:48.940 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:48.940 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:49.201 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:49.201 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:49.201 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:49.462 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:49.462 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:49.462 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:49.723 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:49.723 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:49.984 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:49.984 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:49.984 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:50.244 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:50.244 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:50.245 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:50.505 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:50.766 06:53:10 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:50.766 06:53:10 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:50.766 06:53:10 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:50.766 06:53:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:41:50.766 06:53:10 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:50.766 06:53:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:41:50.766 06:53:10 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:50.766 06:53:10 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:50.766 06:53:10 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:50.766 06:53:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:50.766 06:53:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:52.684 06:53:12 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:52.684 00:41:52.684 real 1m18.744s 00:41:52.684 
user 7m50.548s 00:41:52.684 sys 0m22.550s 00:41:52.684 06:53:12 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:52.684 06:53:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:52.684 ************************************ 00:41:52.684 END TEST nvmf_dif 00:41:52.684 ************************************ 00:41:52.946 06:53:12 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:52.946 06:53:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:41:52.946 06:53:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:52.946 06:53:12 -- common/autotest_common.sh@10 -- # set +x 00:41:52.946 ************************************ 00:41:52.946 START TEST nvmf_abort_qd_sizes 00:41:52.946 ************************************ 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:52.946 * Looking for test storage... 00:41:52.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:52.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:52.946 --rc genhtml_branch_coverage=1 00:41:52.946 --rc genhtml_function_coverage=1 00:41:52.946 --rc genhtml_legend=1 00:41:52.946 --rc geninfo_all_blocks=1 00:41:52.946 --rc geninfo_unexecuted_blocks=1 00:41:52.946 00:41:52.946 ' 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:52.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:52.946 --rc genhtml_branch_coverage=1 00:41:52.946 --rc genhtml_function_coverage=1 00:41:52.946 --rc genhtml_legend=1 00:41:52.946 --rc geninfo_all_blocks=1 00:41:52.946 --rc geninfo_unexecuted_blocks=1 00:41:52.946 00:41:52.946 ' 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:52.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:52.946 --rc genhtml_branch_coverage=1 00:41:52.946 --rc genhtml_function_coverage=1 00:41:52.946 --rc genhtml_legend=1 00:41:52.946 --rc geninfo_all_blocks=1 00:41:52.946 --rc geninfo_unexecuted_blocks=1 00:41:52.946 00:41:52.946 ' 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:52.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:52.946 --rc genhtml_branch_coverage=1 00:41:52.946 --rc genhtml_function_coverage=1 00:41:52.946 --rc genhtml_legend=1 00:41:52.946 --rc geninfo_all_blocks=1 00:41:52.946 --rc geninfo_unexecuted_blocks=1 00:41:52.946 00:41:52.946 ' 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:52.946 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.208 06:53:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:53.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:53.209 06:53:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:01.354 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:01.355 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:01.355 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:01.355 Found net devices under 0000:31:00.0: cvl_0_0 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:01.355 Found net devices under 0000:31:00.1: cvl_0_1 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:01.355 06:53:20 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:01.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:01.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:42:01.355 00:42:01.355 --- 10.0.0.2 ping statistics --- 00:42:01.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.355 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:01.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:01.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:42:01.355 00:42:01.355 --- 10.0.0.1 ping statistics --- 00:42:01.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.355 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:01.355 06:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:04.660 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:04.660 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:04.660 06:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:04.660 06:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:04.660 06:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:04.660 06:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:04.660 06:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:04.660 06:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3041647 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3041647 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 3041647 ']' 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:04.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:04.922 06:53:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:04.922 [2024-11-20 06:53:24.682869] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:42:04.922 [2024-11-20 06:53:24.682933] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:04.922 [2024-11-20 06:53:24.783850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:04.922 [2024-11-20 06:53:24.838222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:04.922 [2024-11-20 06:53:24.838277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:04.922 [2024-11-20 06:53:24.838286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:04.922 [2024-11-20 06:53:24.838294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:04.922 [2024-11-20 06:53:24.838300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:05.183 [2024-11-20 06:53:24.840440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:05.183 [2024-11-20 06:53:24.840602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:05.183 [2024-11-20 06:53:24.840783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:05.183 [2024-11-20 06:53:24.840786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:05.754 
06:53:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:05.754 06:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:05.754 ************************************ 00:42:05.754 START TEST spdk_target_abort 00:42:05.754 ************************************ 00:42:05.754 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:42:05.754 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:05.754 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:42:05.754 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.754 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:06.016 spdk_targetn1 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:06.016 [2024-11-20 06:53:25.899089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.016 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:06.277 [2024-11-20 06:53:25.947556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:06.277 06:53:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:06.277 [2024-11-20 06:53:26.113422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:472 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:42:06.277 [2024-11-20 06:53:26.113473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003d p:1 m:0 dnr:0 00:42:06.277 [2024-11-20 06:53:26.137380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1168 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:42:06.277 [2024-11-20 06:53:26.137413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0095 p:1 m:0 dnr:0 00:42:06.277 [2024-11-20 06:53:26.145290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1408 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:42:06.277 [2024-11-20 06:53:26.145317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b3 p:1 m:0 dnr:0 00:42:06.277 [2024-11-20 06:53:26.169353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2112 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:42:06.277 [2024-11-20 06:53:26.169385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:42:06.538 [2024-11-20 06:53:26.194914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3000 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:42:06.538 [2024-11-20 06:53:26.194947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:42:06.538 [2024-11-20 06:53:26.227558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3984 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:42:06.538 [2024-11-20 06:53:26.227590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f6 p:0 m:0 dnr:0 00:42:06.538 [2024-11-20 06:53:26.228394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:4040 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:42:06.538 [2024-11-20 06:53:26.228412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:42:09.839 Initializing NVMe Controllers 00:42:09.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:09.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:09.839 Initialization complete. Launching workers. 
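The target bring-up that precedes these abort runs, as a single RPC sequence. Method names and arguments are verbatim from the rpc_cmd trace above; scripts/rpc.py stands in for the test's rpc_cmd wrapper and assumes the default /var/tmp/spdk.sock.

scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # yields bdev spdk_targetn1
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420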
00:42:09.839 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11398, failed: 7 00:42:09.839 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2254, failed to submit 9151 00:42:09.839 success 783, unsuccessful 1471, failed 0 00:42:09.839 06:53:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:09.839 06:53:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:09.839 [2024-11-20 06:53:29.460100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:472 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:42:09.839 [2024-11-20 06:53:29.460130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:42:09.839 [2024-11-20 06:53:29.510453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:1592 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:42:09.839 [2024-11-20 06:53:29.510480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:00cc p:1 m:0 dnr:0 00:42:09.839 [2024-11-20 06:53:29.521643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:1728 len:8 PRP1 0x200004e54000 PRP2 0x0 00:42:09.839 [2024-11-20 06:53:29.521665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:00da p:1 m:0 dnr:0 00:42:09.839 [2024-11-20 06:53:29.537913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:1976 len:8 PRP1 0x200004e46000 PRP2 0x0 00:42:09.839 [2024-11-20 06:53:29.537935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:42:09.839 [2024-11-20 06:53:29.560900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:2640 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:42:09.839 [2024-11-20 06:53:29.560922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:42:09.839 [2024-11-20 06:53:29.616927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:3984 len:8 PRP1 0x200004e58000 PRP2 0x0 00:42:09.839 [2024-11-20 06:53:29.616950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:42:11.751 [2024-11-20 06:53:31.548706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:49048 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:42:11.751 [2024-11-20 06:53:31.548737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:00f4 p:0 m:0 dnr:0 00:42:12.688 Initializing NVMe Controllers 00:42:12.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:12.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:12.688 Initialization complete. Launching workers. 
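The runs summarized above and below come from the qds=(4 24 64) sweep in abort_qd_sizes.sh; unrolled, with the abort example binary and its -r connection string exactly as traced:

for qd in 4 24 64; do
  ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done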
00:42:12.688 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8713, failed: 7 00:42:12.688 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 7492 00:42:12.688 success 324, unsuccessful 904, failed 0 00:42:12.688 06:53:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:12.688 06:53:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:15.230 [2024-11-20 06:53:34.979063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:156 nsid:1 lba:267160 len:8 PRP1 0x200004b14000 PRP2 0x0 00:42:15.230 [2024-11-20 06:53:34.979107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:156 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:42:16.171 Initializing NVMe Controllers 00:42:16.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:16.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:16.171 Initialization complete. Launching workers. 00:42:16.171 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43929, failed: 1 00:42:16.171 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2791, failed to submit 41139 00:42:16.171 success 633, unsuccessful 2158, failed 0 00:42:16.171 06:53:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:16.171 06:53:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.171 06:53:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:16.171 06:53:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.171 06:53:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:16.171 06:53:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:16.171 06:53:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3041647 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 3041647 ']' 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 3041647 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3041647 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:18.082 06:53:37 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3041647' 00:42:18.082 killing process with pid 3041647 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 3041647 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 3041647 00:42:18.082 00:42:18.082 real 0m12.170s 00:42:18.082 user 0m49.368s 00:42:18.082 sys 0m2.131s 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:18.082 ************************************ 00:42:18.082 END TEST spdk_target_abort 00:42:18.082 ************************************ 00:42:18.082 06:53:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:18.082 06:53:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:18.082 06:53:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:18.082 06:53:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:18.082 ************************************ 00:42:18.082 START TEST kernel_target_abort 00:42:18.082 ************************************ 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:18.082 06:53:37 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:18.082 06:53:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:21.381 Waiting for block devices as requested 00:42:21.642 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:21.642 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:21.642 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:21.642 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:21.903 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:21.903 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:21.903 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:22.163 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:22.163 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:22.423 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:22.423 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:22.423 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:22.684 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:22.684 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:22.684 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:22.944 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:22.944 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:23.204 No valid GPT data, bailing 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:23.204 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:42:23.465 00:42:23.465 Discovery Log Number of Records 2, Generation counter 2 00:42:23.465 =====Discovery Log Entry 0====== 00:42:23.465 trtype: tcp 00:42:23.465 adrfam: ipv4 00:42:23.465 subtype: current discovery subsystem 00:42:23.465 treq: not specified, sq flow control disable supported 00:42:23.465 portid: 1 00:42:23.465 trsvcid: 4420 00:42:23.465 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:23.465 traddr: 10.0.0.1 00:42:23.465 eflags: none 00:42:23.465 sectype: none 00:42:23.465 =====Discovery Log Entry 1====== 00:42:23.465 trtype: tcp 00:42:23.465 adrfam: ipv4 00:42:23.465 subtype: nvme subsystem 00:42:23.465 treq: not specified, sq flow control disable supported 00:42:23.465 portid: 1 00:42:23.465 trsvcid: 4420 00:42:23.465 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:23.465 traddr: 10.0.0.1 00:42:23.465 eflags: none 00:42:23.465 sectype: none 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:23.465 
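configure_kernel_target, as traced above, with the configfs writes spelled out. xtrace hides redirection targets, so the attribute file names below are restated from the standard nvmet configfs interface rather than from the log; the file the trace writes "SPDK-nqn..." into is assumed to be the subsystem serial attribute.

nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir "$sub" "$sub/namespaces/1" "$port"     # configfs auto-creates namespaces/ under the subsystem
echo "SPDK-$nqn" > "$sub/attr_serial"        # assumed target of the SPDK-nqn echo
echo 1 > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"             # expose the subsystem on the port
# sanity check, exactly as the trace then does; expect two discovery records
nvme discover -t tcp -a 10.0.0.1 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
  --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6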
06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:23.465 06:53:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:26.767 Initializing NVMe Controllers 00:42:26.767 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:26.767 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:26.767 Initialization complete. Launching workers. 00:42:26.767 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67635, failed: 0 00:42:26.767 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67635, failed to submit 0 00:42:26.767 success 0, unsuccessful 67635, failed 0 00:42:26.768 06:53:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:26.768 06:53:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:30.067 Initializing NVMe Controllers 00:42:30.067 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:30.067 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:30.067 Initialization complete. Launching workers. 
00:42:30.067 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 113829, failed: 0 00:42:30.067 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28654, failed to submit 85175 00:42:30.067 success 0, unsuccessful 28654, failed 0 00:42:30.067 06:53:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:30.067 06:53:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:33.365 Initializing NVMe Controllers 00:42:33.365 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:33.365 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:33.365 Initialization complete. Launching workers. 00:42:33.365 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146413, failed: 0 00:42:33.365 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36654, failed to submit 109759 00:42:33.365 success 0, unsuccessful 36654, failed 0 00:42:33.365 06:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:33.365 06:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:33.365 06:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:33.365 06:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:33.365 06:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:33.365 06:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:33.365 06:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:33.365 06:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:33.365 06:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:33.365 06:53:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:36.665 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:36.665 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:42:36.665 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:38.049 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:38.621 00:42:38.621 real 0m20.505s 00:42:38.621 user 0m9.892s 00:42:38.621 sys 0m6.299s 00:42:38.621 06:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:38.621 06:53:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:38.621 ************************************ 00:42:38.621 END TEST kernel_target_abort 00:42:38.621 ************************************ 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:38.621 rmmod nvme_tcp 00:42:38.621 rmmod nvme_fabrics 00:42:38.621 rmmod nvme_keyring 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3041647 ']' 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3041647 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 3041647 ']' 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 3041647 00:42:38.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3041647) - No such process 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 3041647 is not found' 00:42:38.621 Process with pid 3041647 is not found 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:38.621 06:53:58 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:42.007 Waiting for block devices as requested 00:42:42.007 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:42.007 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:42.268 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:42.268 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:42.268 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:42.529 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:42.529 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:42.529 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:42.789 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:42.789 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:43.051 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:43.051 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:43.051 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:43.312 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:43.312 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:43.312 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:43.572 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:43.833 06:54:03 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:43.833 06:54:03 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:43.833 06:54:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:43.833 06:54:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:43.833 06:54:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:43.833 06:54:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:43.833 06:54:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:43.833 06:54:03 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:43.833 06:54:03 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:43.833 06:54:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:43.833 06:54:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:46.379 06:54:05 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:46.379 00:42:46.379 real 0m53.048s 00:42:46.379 user 1m4.779s 00:42:46.379 sys 0m19.861s 00:42:46.379 06:54:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:46.379 06:54:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:46.379 ************************************ 00:42:46.379 END TEST nvmf_abort_qd_sizes 00:42:46.379 ************************************ 00:42:46.379 06:54:05 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:46.379 06:54:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:46.379 06:54:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:46.379 06:54:05 -- common/autotest_common.sh@10 -- # set +x 00:42:46.379 ************************************ 00:42:46.379 START TEST keyring_file 00:42:46.379 ************************************ 00:42:46.379 06:54:05 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:46.379 * Looking for test storage... 
00:42:46.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:46.379 06:54:05 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:46.379 06:54:05 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:42:46.379 06:54:05 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:46.379 06:54:05 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:46.379 06:54:05 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:46.379 06:54:05 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:46.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.379 --rc genhtml_branch_coverage=1 00:42:46.379 --rc genhtml_function_coverage=1 00:42:46.379 --rc genhtml_legend=1 00:42:46.379 --rc geninfo_all_blocks=1 00:42:46.379 --rc geninfo_unexecuted_blocks=1 00:42:46.379 00:42:46.379 ' 00:42:46.379 06:54:05 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:46.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.379 --rc genhtml_branch_coverage=1 00:42:46.379 --rc genhtml_function_coverage=1 00:42:46.379 --rc genhtml_legend=1 00:42:46.379 --rc geninfo_all_blocks=1 
00:42:46.379 --rc geninfo_unexecuted_blocks=1 00:42:46.379 00:42:46.379 ' 00:42:46.379 06:54:05 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:46.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.379 --rc genhtml_branch_coverage=1 00:42:46.379 --rc genhtml_function_coverage=1 00:42:46.379 --rc genhtml_legend=1 00:42:46.379 --rc geninfo_all_blocks=1 00:42:46.379 --rc geninfo_unexecuted_blocks=1 00:42:46.379 00:42:46.379 ' 00:42:46.379 06:54:05 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:46.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.379 --rc genhtml_branch_coverage=1 00:42:46.379 --rc genhtml_function_coverage=1 00:42:46.379 --rc genhtml_legend=1 00:42:46.379 --rc geninfo_all_blocks=1 00:42:46.379 --rc geninfo_unexecuted_blocks=1 00:42:46.379 00:42:46.379 ' 00:42:46.379 06:54:05 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:46.379 06:54:05 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:46.379 06:54:05 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:46.379 06:54:05 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:46.380 06:54:05 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:46.380 06:54:05 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.380 06:54:05 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.380 06:54:05 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.380 06:54:05 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:46.380 06:54:05 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:46.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:46.380 06:54:05 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:46.380 06:54:05 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:46.380 06:54:05 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:46.380 06:54:05 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:46.380 06:54:05 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:46.380 06:54:05 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:46.380 06:54:05 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:46.380 06:54:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
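prep_key, whose trace begins here, wraps a raw hex key in the NVMe TLS PSK interchange format and stages it as a mode-0600 file for the keyring. A sketch of that flow; format_interchange_psk is the nvmf/common.sh helper seen in the trace, and its base64/CRC encoding happens in the python one-liner, so the encoded value is not reproduced here.

key=00112233445566778899aabbccddeeff        # key0 material from file.sh
path=$(mktemp)                              # e.g. /tmp/tmp.V8l8yHId0M in this run
format_interchange_psk "$key" 0 > "$path"   # digest 0: NVMeTLSkey-1 prefix, no PSK hash
chmod 0600 "$path"                          # keyring_file requires owner-only access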
00:42:46.380 06:54:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:46.380 06:54:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:46.380 06:54:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:46.380 06:54:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:46.380 06:54:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.V8l8yHId0M 00:42:46.380 06:54:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:46.380 06:54:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:46.380 06:54:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.V8l8yHId0M 00:42:46.380 06:54:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.V8l8yHId0M 00:42:46.380 06:54:06 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.V8l8yHId0M 00:42:46.380 06:54:06 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:46.380 06:54:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:46.380 06:54:06 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:46.380 06:54:06 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:46.380 06:54:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:46.380 06:54:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:46.380 06:54:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HMDv9CB07q 00:42:46.380 06:54:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:46.380 06:54:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:46.380 06:54:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:46.380 06:54:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:46.380 06:54:06 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:46.380 06:54:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:46.380 06:54:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:46.380 06:54:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HMDv9CB07q 00:42:46.380 06:54:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HMDv9CB07q 00:42:46.380 06:54:06 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.HMDv9CB07q 00:42:46.380 06:54:06 keyring_file -- keyring/file.sh@30 -- # tgtpid=3051959 00:42:46.380 06:54:06 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3051959 00:42:46.380 06:54:06 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:46.380 06:54:06 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3051959 ']' 00:42:46.380 06:54:06 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:46.380 06:54:06 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:46.380 06:54:06 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:46.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:46.380 06:54:06 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:46.380 06:54:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:46.380 [2024-11-20 06:54:06.165599] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:42:46.380 [2024-11-20 06:54:06.165676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051959 ] 00:42:46.380 [2024-11-20 06:54:06.260867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:46.641 [2024-11-20 06:54:06.314638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:47.213 06:54:06 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:47.213 06:54:06 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:42:47.213 06:54:06 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:47.213 06:54:06 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.213 06:54:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:47.213 [2024-11-20 06:54:06.995274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:47.213 null0 00:42:47.213 [2024-11-20 06:54:07.027304] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:47.213 [2024-11-20 06:54:07.027665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:47.213 06:54:07 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.213 06:54:07 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:47.213 06:54:07 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:47.213 06:54:07 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:47.213 06:54:07 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:47.213 06:54:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:47.213 06:54:07 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:47.213 06:54:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:47.213 06:54:07 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:47.213 06:54:07 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.213 06:54:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:47.213 [2024-11-20 06:54:07.059366] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:47.213 request: 00:42:47.213 { 00:42:47.213 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:47.213 "secure_channel": false, 00:42:47.213 "listen_address": { 00:42:47.213 "trtype": "tcp", 00:42:47.213 "traddr": "127.0.0.1", 00:42:47.213 "trsvcid": "4420" 00:42:47.213 }, 00:42:47.213 "method": "nvmf_subsystem_add_listener", 00:42:47.213 "req_id": 1 00:42:47.213 } 00:42:47.214 Got JSON-RPC error response 00:42:47.214 response: 00:42:47.214 { 00:42:47.214 
"code": -32602, 00:42:47.214 "message": "Invalid parameters" 00:42:47.214 } 00:42:47.214 06:54:07 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:47.214 06:54:07 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:47.214 06:54:07 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:47.214 06:54:07 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:47.214 06:54:07 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:47.214 06:54:07 keyring_file -- keyring/file.sh@47 -- # bperfpid=3052021 00:42:47.214 06:54:07 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3052021 /var/tmp/bperf.sock 00:42:47.214 06:54:07 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:47.214 06:54:07 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3052021 ']' 00:42:47.214 06:54:07 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:47.214 06:54:07 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:47.214 06:54:07 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:47.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:47.214 06:54:07 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:47.214 06:54:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:47.214 [2024-11-20 06:54:07.120083] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:42:47.214 [2024-11-20 06:54:07.120146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052021 ] 00:42:47.474 [2024-11-20 06:54:07.211587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:47.474 [2024-11-20 06:54:07.265129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:48.047 06:54:07 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:48.047 06:54:07 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:42:48.047 06:54:07 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.V8l8yHId0M 00:42:48.047 06:54:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.V8l8yHId0M 00:42:48.308 06:54:08 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HMDv9CB07q 00:42:48.308 06:54:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HMDv9CB07q 00:42:48.568 06:54:08 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:48.568 06:54:08 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:48.568 06:54:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:48.568 06:54:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:48.568 06:54:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:42:48.830 06:54:08 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.V8l8yHId0M == \/\t\m\p\/\t\m\p\.\V\8\l\8\y\H\I\d\0\M ]] 00:42:48.830 06:54:08 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:48.830 06:54:08 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:48.830 06:54:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:48.830 06:54:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:48.830 06:54:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:48.830 06:54:08 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.HMDv9CB07q == \/\t\m\p\/\t\m\p\.\H\M\D\v\9\C\B\0\7\q ]] 00:42:48.830 06:54:08 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:48.830 06:54:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:48.830 06:54:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:48.830 06:54:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:48.830 06:54:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:48.830 06:54:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:49.091 06:54:08 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:49.091 06:54:08 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:49.091 06:54:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:49.091 06:54:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:49.091 06:54:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:49.091 06:54:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:49.091 06:54:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:49.351 06:54:09 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:49.351 06:54:09 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:49.351 06:54:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:49.612 [2024-11-20 06:54:09.273185] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:49.612 nvme0n1 00:42:49.612 06:54:09 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:49.612 06:54:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:49.612 06:54:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:49.612 06:54:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:49.612 06:54:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:49.612 06:54:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:49.873 06:54:09 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:49.873 06:54:09 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:49.873 06:54:09 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:42:49.873 06:54:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:49.873 06:54:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:49.873 06:54:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:49.873 06:54:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:49.873 06:54:09 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:49.873 06:54:09 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:50.135 Running I/O for 1 seconds... 00:42:51.078 17481.00 IOPS, 68.29 MiB/s 00:42:51.078 Latency(us) 00:42:51.078 [2024-11-20T05:54:10.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:51.078 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:51.078 nvme0n1 : 1.00 17540.80 68.52 0.00 0.00 7283.86 2471.25 19879.25 00:42:51.078 [2024-11-20T05:54:10.998Z] =================================================================================================================== 00:42:51.078 [2024-11-20T05:54:10.998Z] Total : 17540.80 68.52 0.00 0.00 7283.86 2471.25 19879.25 00:42:51.078 { 00:42:51.078 "results": [ 00:42:51.078 { 00:42:51.078 "job": "nvme0n1", 00:42:51.078 "core_mask": "0x2", 00:42:51.078 "workload": "randrw", 00:42:51.078 "percentage": 50, 00:42:51.078 "status": "finished", 00:42:51.078 "queue_depth": 128, 00:42:51.078 "io_size": 4096, 00:42:51.078 "runtime": 1.003945, 00:42:51.078 "iops": 17540.801537932854, 00:42:51.078 "mibps": 68.51875600755021, 00:42:51.078 "io_failed": 0, 00:42:51.078 "io_timeout": 0, 00:42:51.078 "avg_latency_us": 7283.857565777021, 00:42:51.078 "min_latency_us": 2471.2533333333336, 00:42:51.078 "max_latency_us": 19879.253333333334 00:42:51.078 } 00:42:51.078 ], 00:42:51.078 "core_count": 1 00:42:51.078 } 00:42:51.078 06:54:10 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:51.078 06:54:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:51.340 06:54:11 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:51.340 06:54:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:51.340 06:54:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:51.340 06:54:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:51.340 06:54:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:51.340 06:54:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.340 06:54:11 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:51.340 06:54:11 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:51.340 06:54:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:51.340 06:54:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:51.340 06:54:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:51.340 06:54:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:51.340 06:54:11 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.610 06:54:11 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:51.610 06:54:11 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:51.610 06:54:11 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:51.610 06:54:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:51.610 06:54:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:51.610 06:54:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:51.610 06:54:11 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:51.610 06:54:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:51.610 06:54:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:51.610 06:54:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:51.870 [2024-11-20 06:54:11.570472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:51.870 [2024-11-20 06:54:11.570621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125acb0 (107): Transport endpoint is not connected 00:42:51.870 [2024-11-20 06:54:11.571617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125acb0 (9): Bad file descriptor 00:42:51.870 [2024-11-20 06:54:11.572618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:51.870 [2024-11-20 06:54:11.572625] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:51.870 [2024-11-20 06:54:11.572631] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:51.870 [2024-11-20 06:54:11.572637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
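The errors above are the expected half of a negative test: the target listener was set up against key0, so a host attach presenting key1 cannot complete the TLS handshake, the socket drops (errno 107), and the RPC returns -5, as the request/response dump just below records. Stripped of the NOT/valid_exec_arg plumbing, the check reduces to this sketch (rpc as defined in the earlier note):

    if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
            -q nqn.2016-06.io.spdk:host0 --psk key1; then
        echo "attach with mismatched PSK unexpectedly succeeded" >&2
        exit 1
    fi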
00:42:51.870 request: 00:42:51.870 { 00:42:51.870 "name": "nvme0", 00:42:51.870 "trtype": "tcp", 00:42:51.870 "traddr": "127.0.0.1", 00:42:51.870 "adrfam": "ipv4", 00:42:51.870 "trsvcid": "4420", 00:42:51.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:51.870 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:51.870 "prchk_reftag": false, 00:42:51.870 "prchk_guard": false, 00:42:51.870 "hdgst": false, 00:42:51.870 "ddgst": false, 00:42:51.870 "psk": "key1", 00:42:51.870 "allow_unrecognized_csi": false, 00:42:51.870 "method": "bdev_nvme_attach_controller", 00:42:51.870 "req_id": 1 00:42:51.870 } 00:42:51.870 Got JSON-RPC error response 00:42:51.870 response: 00:42:51.870 { 00:42:51.870 "code": -5, 00:42:51.870 "message": "Input/output error" 00:42:51.870 } 00:42:51.870 06:54:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:51.870 06:54:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:51.870 06:54:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:51.870 06:54:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:51.870 06:54:11 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:51.870 06:54:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:51.870 06:54:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:51.870 06:54:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:51.870 06:54:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:51.870 06:54:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.130 06:54:11 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:52.130 06:54:11 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:52.130 06:54:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:52.130 06:54:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:52.130 06:54:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:52.130 06:54:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:52.130 06:54:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.130 06:54:11 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:52.130 06:54:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:52.130 06:54:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:52.390 06:54:12 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:52.390 06:54:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:52.650 06:54:12 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:52.650 06:54:12 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:52.650 06:54:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.650 06:54:12 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:52.650 06:54:12 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.V8l8yHId0M 00:42:52.650 06:54:12 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.V8l8yHId0M 00:42:52.650 06:54:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:52.650 06:54:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.V8l8yHId0M 00:42:52.650 06:54:12 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:52.650 06:54:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:52.651 06:54:12 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:52.651 06:54:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:52.651 06:54:12 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.V8l8yHId0M 00:42:52.651 06:54:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.V8l8yHId0M 00:42:52.911 [2024-11-20 06:54:12.661441] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.V8l8yHId0M': 0100660 00:42:52.911 [2024-11-20 06:54:12.661460] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:52.911 request: 00:42:52.911 { 00:42:52.911 "name": "key0", 00:42:52.911 "path": "/tmp/tmp.V8l8yHId0M", 00:42:52.911 "method": "keyring_file_add_key", 00:42:52.911 "req_id": 1 00:42:52.911 } 00:42:52.911 Got JSON-RPC error response 00:42:52.911 response: 00:42:52.911 { 00:42:52.911 "code": -1, 00:42:52.911 "message": "Operation not permitted" 00:42:52.911 } 00:42:52.911 06:54:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:52.911 06:54:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:52.911 06:54:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:52.911 06:54:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:52.911 06:54:12 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.V8l8yHId0M 00:42:52.911 06:54:12 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.V8l8yHId0M 00:42:52.911 06:54:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.V8l8yHId0M 00:42:53.172 06:54:12 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.V8l8yHId0M 00:42:53.172 06:54:12 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:53.172 06:54:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:53.172 06:54:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:53.172 06:54:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:53.172 06:54:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:53.172 06:54:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:53.172 06:54:13 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:53.172 06:54:13 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:53.172 06:54:13 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:53.172 06:54:13 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:53.172 06:54:13 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:53.172 06:54:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:53.172 06:54:13 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:53.172 06:54:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:53.173 06:54:13 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:53.173 06:54:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:53.434 [2024-11-20 06:54:13.182776] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.V8l8yHId0M': No such file or directory 00:42:53.434 [2024-11-20 06:54:13.182792] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:53.434 [2024-11-20 06:54:13.182811] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:53.434 [2024-11-20 06:54:13.182817] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:53.434 [2024-11-20 06:54:13.182823] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:53.434 [2024-11-20 06:54:13.182828] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:53.434 request: 00:42:53.434 { 00:42:53.434 "name": "nvme0", 00:42:53.434 "trtype": "tcp", 00:42:53.434 "traddr": "127.0.0.1", 00:42:53.434 "adrfam": "ipv4", 00:42:53.434 "trsvcid": "4420", 00:42:53.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:53.434 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:53.434 "prchk_reftag": false, 00:42:53.434 "prchk_guard": false, 00:42:53.434 "hdgst": false, 00:42:53.434 "ddgst": false, 00:42:53.434 "psk": "key0", 00:42:53.434 "allow_unrecognized_csi": false, 00:42:53.434 "method": "bdev_nvme_attach_controller", 00:42:53.434 "req_id": 1 00:42:53.434 } 00:42:53.434 Got JSON-RPC error response 00:42:53.434 response: 00:42:53.434 { 00:42:53.434 "code": -19, 00:42:53.434 "message": "No such device" 00:42:53.434 } 00:42:53.434 06:54:13 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:53.434 06:54:13 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:53.434 06:54:13 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:53.434 06:54:13 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:53.434 06:54:13 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:53.434 06:54:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:53.695 06:54:13 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:53.695 06:54:13 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:53.695 06:54:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:53.695 06:54:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:53.695 06:54:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:53.695 06:54:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:53.695 06:54:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oiXabKNnrv 00:42:53.695 06:54:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:53.695 06:54:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:53.695 06:54:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:53.695 06:54:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:53.695 06:54:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:53.695 06:54:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:53.695 06:54:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:53.695 06:54:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oiXabKNnrv 00:42:53.695 06:54:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oiXabKNnrv 00:42:53.695 06:54:13 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.oiXabKNnrv 00:42:53.695 06:54:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oiXabKNnrv 00:42:53.695 06:54:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oiXabKNnrv 00:42:53.695 06:54:13 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:53.695 06:54:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:53.957 nvme0n1 00:42:53.957 06:54:13 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:53.957 06:54:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:53.957 06:54:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:53.957 06:54:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:53.957 06:54:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:53.957 06:54:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:54.218 06:54:14 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:54.218 06:54:14 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:54.218 06:54:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:54.478 06:54:14 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:54.478 06:54:14 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:54.478 06:54:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:54.478 06:54:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:54.478 06:54:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:54.739 06:54:14 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:54.739 06:54:14 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:54.739 06:54:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:54.739 06:54:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:54.739 06:54:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:54.739 06:54:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:54.739 06:54:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:54.739 06:54:14 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:54.739 06:54:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:54.739 06:54:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:55.001 06:54:14 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:55.001 06:54:14 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:55.001 06:54:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:55.262 06:54:14 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:55.262 06:54:14 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oiXabKNnrv 00:42:55.262 06:54:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oiXabKNnrv 00:42:55.262 06:54:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HMDv9CB07q 00:42:55.262 06:54:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HMDv9CB07q 00:42:55.524 06:54:15 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:55.524 06:54:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:55.785 nvme0n1 00:42:55.785 06:54:15 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:55.785 06:54:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:56.047 06:54:15 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:56.047 "subsystems": [ 00:42:56.047 { 00:42:56.047 "subsystem": "keyring", 00:42:56.047 "config": [ 00:42:56.047 { 00:42:56.047 "method": "keyring_file_add_key", 00:42:56.047 "params": { 00:42:56.047 "name": "key0", 00:42:56.047 "path": "/tmp/tmp.oiXabKNnrv" 00:42:56.047 } 00:42:56.047 }, 00:42:56.047 { 00:42:56.047 "method": "keyring_file_add_key", 00:42:56.047 "params": { 00:42:56.047 "name": "key1", 00:42:56.047 "path": "/tmp/tmp.HMDv9CB07q" 00:42:56.047 } 00:42:56.047 } 00:42:56.047 ] 
00:42:56.047 }, 00:42:56.047 { 00:42:56.047 "subsystem": "iobuf", 00:42:56.047 "config": [ 00:42:56.047 { 00:42:56.047 "method": "iobuf_set_options", 00:42:56.047 "params": { 00:42:56.047 "small_pool_count": 8192, 00:42:56.047 "large_pool_count": 1024, 00:42:56.047 "small_bufsize": 8192, 00:42:56.047 "large_bufsize": 135168, 00:42:56.047 "enable_numa": false 00:42:56.047 } 00:42:56.047 } 00:42:56.047 ] 00:42:56.047 }, 00:42:56.047 { 00:42:56.047 "subsystem": "sock", 00:42:56.047 "config": [ 00:42:56.047 { 00:42:56.047 "method": "sock_set_default_impl", 00:42:56.047 "params": { 00:42:56.047 "impl_name": "posix" 00:42:56.047 } 00:42:56.047 }, 00:42:56.047 { 00:42:56.047 "method": "sock_impl_set_options", 00:42:56.047 "params": { 00:42:56.047 "impl_name": "ssl", 00:42:56.047 "recv_buf_size": 4096, 00:42:56.047 "send_buf_size": 4096, 00:42:56.047 "enable_recv_pipe": true, 00:42:56.047 "enable_quickack": false, 00:42:56.047 "enable_placement_id": 0, 00:42:56.047 "enable_zerocopy_send_server": true, 00:42:56.047 "enable_zerocopy_send_client": false, 00:42:56.047 "zerocopy_threshold": 0, 00:42:56.047 "tls_version": 0, 00:42:56.047 "enable_ktls": false 00:42:56.047 } 00:42:56.047 }, 00:42:56.047 { 00:42:56.047 "method": "sock_impl_set_options", 00:42:56.047 "params": { 00:42:56.047 "impl_name": "posix", 00:42:56.047 "recv_buf_size": 2097152, 00:42:56.047 "send_buf_size": 2097152, 00:42:56.047 "enable_recv_pipe": true, 00:42:56.047 "enable_quickack": false, 00:42:56.047 "enable_placement_id": 0, 00:42:56.047 "enable_zerocopy_send_server": true, 00:42:56.047 "enable_zerocopy_send_client": false, 00:42:56.047 "zerocopy_threshold": 0, 00:42:56.047 "tls_version": 0, 00:42:56.047 "enable_ktls": false 00:42:56.047 } 00:42:56.047 } 00:42:56.047 ] 00:42:56.047 }, 00:42:56.047 { 00:42:56.047 "subsystem": "vmd", 00:42:56.047 "config": [] 00:42:56.047 }, 00:42:56.047 { 00:42:56.047 "subsystem": "accel", 00:42:56.047 "config": [ 00:42:56.047 { 00:42:56.047 "method": "accel_set_options", 00:42:56.047 "params": { 00:42:56.047 "small_cache_size": 128, 00:42:56.047 "large_cache_size": 16, 00:42:56.047 "task_count": 2048, 00:42:56.047 "sequence_count": 2048, 00:42:56.047 "buf_count": 2048 00:42:56.047 } 00:42:56.047 } 00:42:56.047 ] 00:42:56.047 }, 00:42:56.047 { 00:42:56.047 "subsystem": "bdev", 00:42:56.047 "config": [ 00:42:56.047 { 00:42:56.047 "method": "bdev_set_options", 00:42:56.047 "params": { 00:42:56.047 "bdev_io_pool_size": 65535, 00:42:56.047 "bdev_io_cache_size": 256, 00:42:56.047 "bdev_auto_examine": true, 00:42:56.047 "iobuf_small_cache_size": 128, 00:42:56.047 "iobuf_large_cache_size": 16 00:42:56.047 } 00:42:56.047 }, 00:42:56.047 { 00:42:56.047 "method": "bdev_raid_set_options", 00:42:56.047 "params": { 00:42:56.047 "process_window_size_kb": 1024, 00:42:56.047 "process_max_bandwidth_mb_sec": 0 00:42:56.047 } 00:42:56.047 }, 00:42:56.047 { 00:42:56.047 "method": "bdev_iscsi_set_options", 00:42:56.047 "params": { 00:42:56.047 "timeout_sec": 30 00:42:56.047 } 00:42:56.047 }, 00:42:56.047 { 00:42:56.047 "method": "bdev_nvme_set_options", 00:42:56.047 "params": { 00:42:56.047 "action_on_timeout": "none", 00:42:56.047 "timeout_us": 0, 00:42:56.048 "timeout_admin_us": 0, 00:42:56.048 "keep_alive_timeout_ms": 10000, 00:42:56.048 "arbitration_burst": 0, 00:42:56.048 "low_priority_weight": 0, 00:42:56.048 "medium_priority_weight": 0, 00:42:56.048 "high_priority_weight": 0, 00:42:56.048 "nvme_adminq_poll_period_us": 10000, 00:42:56.048 "nvme_ioq_poll_period_us": 0, 00:42:56.048 "io_queue_requests": 512, 
00:42:56.048 "delay_cmd_submit": true, 00:42:56.048 "transport_retry_count": 4, 00:42:56.048 "bdev_retry_count": 3, 00:42:56.048 "transport_ack_timeout": 0, 00:42:56.048 "ctrlr_loss_timeout_sec": 0, 00:42:56.048 "reconnect_delay_sec": 0, 00:42:56.048 "fast_io_fail_timeout_sec": 0, 00:42:56.048 "disable_auto_failback": false, 00:42:56.048 "generate_uuids": false, 00:42:56.048 "transport_tos": 0, 00:42:56.048 "nvme_error_stat": false, 00:42:56.048 "rdma_srq_size": 0, 00:42:56.048 "io_path_stat": false, 00:42:56.048 "allow_accel_sequence": false, 00:42:56.048 "rdma_max_cq_size": 0, 00:42:56.048 "rdma_cm_event_timeout_ms": 0, 00:42:56.048 "dhchap_digests": [ 00:42:56.048 "sha256", 00:42:56.048 "sha384", 00:42:56.048 "sha512" 00:42:56.048 ], 00:42:56.048 "dhchap_dhgroups": [ 00:42:56.048 "null", 00:42:56.048 "ffdhe2048", 00:42:56.048 "ffdhe3072", 00:42:56.048 "ffdhe4096", 00:42:56.048 "ffdhe6144", 00:42:56.048 "ffdhe8192" 00:42:56.048 ] 00:42:56.048 } 00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "method": "bdev_nvme_attach_controller", 00:42:56.048 "params": { 00:42:56.048 "name": "nvme0", 00:42:56.048 "trtype": "TCP", 00:42:56.048 "adrfam": "IPv4", 00:42:56.048 "traddr": "127.0.0.1", 00:42:56.048 "trsvcid": "4420", 00:42:56.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:56.048 "prchk_reftag": false, 00:42:56.048 "prchk_guard": false, 00:42:56.048 "ctrlr_loss_timeout_sec": 0, 00:42:56.048 "reconnect_delay_sec": 0, 00:42:56.048 "fast_io_fail_timeout_sec": 0, 00:42:56.048 "psk": "key0", 00:42:56.048 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:56.048 "hdgst": false, 00:42:56.048 "ddgst": false, 00:42:56.048 "multipath": "multipath" 00:42:56.048 } 00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "method": "bdev_nvme_set_hotplug", 00:42:56.048 "params": { 00:42:56.048 "period_us": 100000, 00:42:56.048 "enable": false 00:42:56.048 } 00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "method": "bdev_wait_for_examine" 00:42:56.048 } 00:42:56.048 ] 00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "subsystem": "nbd", 00:42:56.048 "config": [] 00:42:56.048 } 00:42:56.048 ] 00:42:56.048 }' 00:42:56.048 06:54:15 keyring_file -- keyring/file.sh@115 -- # killprocess 3052021 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3052021 ']' 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3052021 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3052021 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3052021' 00:42:56.048 killing process with pid 3052021 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@971 -- # kill 3052021 00:42:56.048 Received shutdown signal, test time was about 1.000000 seconds 00:42:56.048 00:42:56.048 Latency(us) 00:42:56.048 [2024-11-20T05:54:15.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:56.048 [2024-11-20T05:54:15.968Z] =================================================================================================================== 00:42:56.048 [2024-11-20T05:54:15.968Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@976 -- # wait 3052021 00:42:56.048 06:54:15 keyring_file -- keyring/file.sh@118 -- # bperfpid=3054273 00:42:56.048 06:54:15 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3054273 /var/tmp/bperf.sock 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3054273 ']' 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:56.048 06:54:15 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:56.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:56.048 06:54:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:56.048 06:54:15 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:56.048 "subsystems": [ 00:42:56.048 { 00:42:56.048 "subsystem": "keyring", 00:42:56.048 "config": [ 00:42:56.048 { 00:42:56.048 "method": "keyring_file_add_key", 00:42:56.048 "params": { 00:42:56.048 "name": "key0", 00:42:56.048 "path": "/tmp/tmp.oiXabKNnrv" 00:42:56.048 } 00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "method": "keyring_file_add_key", 00:42:56.048 "params": { 00:42:56.048 "name": "key1", 00:42:56.048 "path": "/tmp/tmp.HMDv9CB07q" 00:42:56.048 } 00:42:56.048 } 00:42:56.048 ] 00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "subsystem": "iobuf", 00:42:56.048 "config": [ 00:42:56.048 { 00:42:56.048 "method": "iobuf_set_options", 00:42:56.048 "params": { 00:42:56.048 "small_pool_count": 8192, 00:42:56.048 "large_pool_count": 1024, 00:42:56.048 "small_bufsize": 8192, 00:42:56.048 "large_bufsize": 135168, 00:42:56.048 "enable_numa": false 00:42:56.048 } 00:42:56.048 } 00:42:56.048 ] 00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "subsystem": "sock", 00:42:56.048 "config": [ 00:42:56.048 { 00:42:56.048 "method": "sock_set_default_impl", 00:42:56.048 "params": { 00:42:56.048 "impl_name": "posix" 00:42:56.048 } 00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "method": "sock_impl_set_options", 00:42:56.048 "params": { 00:42:56.048 "impl_name": "ssl", 00:42:56.048 "recv_buf_size": 4096, 00:42:56.048 "send_buf_size": 4096, 00:42:56.048 "enable_recv_pipe": true, 00:42:56.048 "enable_quickack": false, 00:42:56.048 "enable_placement_id": 0, 00:42:56.048 "enable_zerocopy_send_server": true, 00:42:56.048 "enable_zerocopy_send_client": false, 00:42:56.048 "zerocopy_threshold": 0, 00:42:56.048 "tls_version": 0, 00:42:56.048 "enable_ktls": false 00:42:56.048 } 00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "method": "sock_impl_set_options", 00:42:56.048 "params": { 00:42:56.048 "impl_name": "posix", 00:42:56.048 "recv_buf_size": 2097152, 00:42:56.048 "send_buf_size": 2097152, 00:42:56.048 "enable_recv_pipe": true, 00:42:56.048 "enable_quickack": false, 00:42:56.048 "enable_placement_id": 0, 00:42:56.048 "enable_zerocopy_send_server": true, 00:42:56.048 "enable_zerocopy_send_client": false, 00:42:56.048 "zerocopy_threshold": 0, 00:42:56.048 "tls_version": 0, 00:42:56.048 "enable_ktls": false 00:42:56.048 } 00:42:56.048 } 00:42:56.048 ] 
00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "subsystem": "vmd", 00:42:56.048 "config": [] 00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "subsystem": "accel", 00:42:56.048 "config": [ 00:42:56.048 { 00:42:56.048 "method": "accel_set_options", 00:42:56.048 "params": { 00:42:56.048 "small_cache_size": 128, 00:42:56.048 "large_cache_size": 16, 00:42:56.048 "task_count": 2048, 00:42:56.048 "sequence_count": 2048, 00:42:56.048 "buf_count": 2048 00:42:56.048 } 00:42:56.048 } 00:42:56.048 ] 00:42:56.048 }, 00:42:56.048 { 00:42:56.048 "subsystem": "bdev", 00:42:56.048 "config": [ 00:42:56.048 { 00:42:56.048 "method": "bdev_set_options", 00:42:56.048 "params": { 00:42:56.048 "bdev_io_pool_size": 65535, 00:42:56.048 "bdev_io_cache_size": 256, 00:42:56.049 "bdev_auto_examine": true, 00:42:56.049 "iobuf_small_cache_size": 128, 00:42:56.049 "iobuf_large_cache_size": 16 00:42:56.049 } 00:42:56.049 }, 00:42:56.049 { 00:42:56.049 "method": "bdev_raid_set_options", 00:42:56.049 "params": { 00:42:56.049 "process_window_size_kb": 1024, 00:42:56.049 "process_max_bandwidth_mb_sec": 0 00:42:56.049 } 00:42:56.049 }, 00:42:56.049 { 00:42:56.049 "method": "bdev_iscsi_set_options", 00:42:56.049 "params": { 00:42:56.049 "timeout_sec": 30 00:42:56.049 } 00:42:56.049 }, 00:42:56.049 { 00:42:56.049 "method": "bdev_nvme_set_options", 00:42:56.049 "params": { 00:42:56.049 "action_on_timeout": "none", 00:42:56.049 "timeout_us": 0, 00:42:56.049 "timeout_admin_us": 0, 00:42:56.049 "keep_alive_timeout_ms": 10000, 00:42:56.049 "arbitration_burst": 0, 00:42:56.049 "low_priority_weight": 0, 00:42:56.049 "medium_priority_weight": 0, 00:42:56.049 "high_priority_weight": 0, 00:42:56.049 "nvme_adminq_poll_period_us": 10000, 00:42:56.049 "nvme_ioq_poll_period_us": 0, 00:42:56.049 "io_queue_requests": 512, 00:42:56.049 "delay_cmd_submit": true, 00:42:56.049 "transport_retry_count": 4, 00:42:56.049 "bdev_retry_count": 3, 00:42:56.049 "transport_ack_timeout": 0, 00:42:56.049 "ctrlr_loss_timeout_sec": 0, 00:42:56.049 "reconnect_delay_sec": 0, 00:42:56.049 "fast_io_fail_timeout_sec": 0, 00:42:56.049 "disable_auto_failback": false, 00:42:56.049 "generate_uuids": false, 00:42:56.049 "transport_tos": 0, 00:42:56.049 "nvme_error_stat": false, 00:42:56.049 "rdma_srq_size": 0, 00:42:56.049 "io_path_stat": false, 00:42:56.049 "allow_accel_sequence": false, 00:42:56.049 "rdma_max_cq_size": 0, 00:42:56.049 "rdma_cm_event_timeout_ms": 0, 00:42:56.049 "dhchap_digests": [ 00:42:56.049 "sha256", 00:42:56.049 "sha384", 00:42:56.049 "sha512" 00:42:56.049 ], 00:42:56.049 "dhchap_dhgroups": [ 00:42:56.049 "null", 00:42:56.049 "ffdhe2048", 00:42:56.049 "ffdhe3072", 00:42:56.049 "ffdhe4096", 00:42:56.049 "ffdhe6144", 00:42:56.049 "ffdhe8192" 00:42:56.049 ] 00:42:56.049 } 00:42:56.049 }, 00:42:56.049 { 00:42:56.049 "method": "bdev_nvme_attach_controller", 00:42:56.049 "params": { 00:42:56.049 "name": "nvme0", 00:42:56.049 "trtype": "TCP", 00:42:56.049 "adrfam": "IPv4", 00:42:56.049 "traddr": "127.0.0.1", 00:42:56.049 "trsvcid": "4420", 00:42:56.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:56.049 "prchk_reftag": false, 00:42:56.049 "prchk_guard": false, 00:42:56.049 "ctrlr_loss_timeout_sec": 0, 00:42:56.049 "reconnect_delay_sec": 0, 00:42:56.049 "fast_io_fail_timeout_sec": 0, 00:42:56.049 "psk": "key0", 00:42:56.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:56.049 "hdgst": false, 00:42:56.049 "ddgst": false, 00:42:56.049 "multipath": "multipath" 00:42:56.049 } 00:42:56.049 }, 00:42:56.049 { 00:42:56.049 "method": "bdev_nvme_set_hotplug", 00:42:56.049 
"params": { 00:42:56.049 "period_us": 100000, 00:42:56.049 "enable": false 00:42:56.049 } 00:42:56.049 }, 00:42:56.049 { 00:42:56.049 "method": "bdev_wait_for_examine" 00:42:56.049 } 00:42:56.049 ] 00:42:56.049 }, 00:42:56.049 { 00:42:56.049 "subsystem": "nbd", 00:42:56.049 "config": [] 00:42:56.049 } 00:42:56.049 ] 00:42:56.049 }' 00:42:56.309 [2024-11-20 06:54:16.016136] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:42:56.309 [2024-11-20 06:54:16.016192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054273 ] 00:42:56.309 [2024-11-20 06:54:16.100434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:56.309 [2024-11-20 06:54:16.129683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:56.570 [2024-11-20 06:54:16.273837] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:57.140 06:54:16 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:57.140 06:54:16 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:42:57.140 06:54:16 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:57.140 06:54:16 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:57.140 06:54:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:57.140 06:54:16 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:57.140 06:54:16 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:57.140 06:54:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:57.140 06:54:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:57.140 06:54:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:57.140 06:54:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:57.140 06:54:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:57.401 06:54:17 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:57.401 06:54:17 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:57.401 06:54:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:57.401 06:54:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:57.401 06:54:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:57.401 06:54:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:57.401 06:54:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:57.401 06:54:17 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:57.661 06:54:17 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:57.661 06:54:17 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:57.661 06:54:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:57.662 06:54:17 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:57.662 06:54:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:57.662 06:54:17 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.oiXabKNnrv /tmp/tmp.HMDv9CB07q 00:42:57.662 06:54:17 keyring_file -- keyring/file.sh@20 -- # killprocess 3054273 00:42:57.662 06:54:17 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3054273 ']' 00:42:57.662 06:54:17 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3054273 00:42:57.662 06:54:17 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:57.662 06:54:17 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:57.662 06:54:17 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3054273 00:42:57.662 06:54:17 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:57.662 06:54:17 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:57.662 06:54:17 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3054273' 00:42:57.662 killing process with pid 3054273 00:42:57.662 06:54:17 keyring_file -- common/autotest_common.sh@971 -- # kill 3054273 00:42:57.662 Received shutdown signal, test time was about 1.000000 seconds 00:42:57.662 00:42:57.662 Latency(us) 00:42:57.662 [2024-11-20T05:54:17.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:57.662 [2024-11-20T05:54:17.582Z] =================================================================================================================== 00:42:57.662 [2024-11-20T05:54:17.582Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:57.662 06:54:17 keyring_file -- common/autotest_common.sh@976 -- # wait 3054273 00:42:57.922 06:54:17 keyring_file -- keyring/file.sh@21 -- # killprocess 3051959 00:42:57.922 06:54:17 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3051959 ']' 00:42:57.922 06:54:17 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3051959 00:42:57.922 06:54:17 keyring_file -- common/autotest_common.sh@957 -- # uname 00:42:57.922 06:54:17 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:57.922 06:54:17 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3051959 00:42:57.922 06:54:17 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:57.922 06:54:17 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:57.922 06:54:17 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3051959' 00:42:57.922 killing process with pid 3051959 00:42:57.922 06:54:17 keyring_file -- common/autotest_common.sh@971 -- # kill 3051959 00:42:57.922 06:54:17 keyring_file -- common/autotest_common.sh@976 -- # wait 3051959 00:42:58.182 00:42:58.182 real 0m12.193s 00:42:58.182 user 0m29.456s 00:42:58.182 sys 0m2.764s 00:42:58.182 06:54:17 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:58.182 06:54:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:58.182 ************************************ 00:42:58.182 END TEST keyring_file 00:42:58.182 ************************************ 00:42:58.182 06:54:17 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:42:58.182 06:54:17 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:58.182 06:54:17 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:42:58.182 06:54:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 
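keyring_file is finished; run_test now starts keyring_linux through scripts/keyctl-session-wrapper, which is why the new test's first line below reports "Joined session keyring". The wrapper's shape is assumed here, not copied from the source: its job is to run the test inside a fresh kernel session keyring so keyctl entries cannot leak between test runs.

    # assumed sketch of scripts/keyctl-session-wrapper
    exec keyctl session - "$@"   # anonymous session keyring, then run the wrapped test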
00:42:58.182 06:54:17 -- common/autotest_common.sh@10 -- # set +x 00:42:58.182 ************************************ 00:42:58.182 START TEST keyring_linux 00:42:58.182 ************************************ 00:42:58.182 06:54:17 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:58.182 Joined session keyring: 824226941 00:42:58.182 * Looking for test storage... 00:42:58.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:58.182 06:54:18 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:58.182 06:54:18 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:42:58.182 06:54:18 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:58.443 06:54:18 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:58.443 06:54:18 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:58.444 06:54:18 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:58.444 06:54:18 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:58.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.444 --rc genhtml_branch_coverage=1 00:42:58.444 --rc genhtml_function_coverage=1 00:42:58.444 --rc genhtml_legend=1 00:42:58.444 --rc geninfo_all_blocks=1 00:42:58.444 --rc geninfo_unexecuted_blocks=1 00:42:58.444 00:42:58.444 ' 00:42:58.444 06:54:18 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:58.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.444 --rc genhtml_branch_coverage=1 00:42:58.444 --rc genhtml_function_coverage=1 00:42:58.444 --rc genhtml_legend=1 00:42:58.444 --rc geninfo_all_blocks=1 00:42:58.444 --rc geninfo_unexecuted_blocks=1 00:42:58.444 00:42:58.444 ' 00:42:58.444 06:54:18 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:58.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.444 --rc genhtml_branch_coverage=1 00:42:58.444 --rc genhtml_function_coverage=1 00:42:58.444 --rc genhtml_legend=1 00:42:58.444 --rc geninfo_all_blocks=1 00:42:58.444 --rc geninfo_unexecuted_blocks=1 00:42:58.444 00:42:58.444 ' 00:42:58.444 06:54:18 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:58.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.444 --rc genhtml_branch_coverage=1 00:42:58.444 --rc genhtml_function_coverage=1 00:42:58.444 --rc genhtml_legend=1 00:42:58.444 --rc geninfo_all_blocks=1 00:42:58.444 --rc geninfo_unexecuted_blocks=1 00:42:58.444 00:42:58.444 ' 00:42:58.444 06:54:18 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:58.444 06:54:18 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:58.444 06:54:18 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.444 06:54:18 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.444 06:54:18 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.444 06:54:18 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:58.444 06:54:18 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
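
The repeated /opt/golangci, /opt/protoc and /opt/go segments in the PATH echoed above come from paths/export.sh prepending unconditionally each time it is sourced, so four sourcings leave four copies. A dedupe-on-prepend guard would keep it idempotent; this is a sketch, not something the SPDK tree does:

path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present, skip
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
export PATH

Duplicates are harmless for lookup (the first hit wins), but they make PATH dumps like the one above hard to read.
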
00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:58.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:58.444 06:54:18 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:58.444 06:54:18 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:58.444 06:54:18 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:58.444 06:54:18 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:58.444 06:54:18 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:58.444 06:54:18 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:58.444 06:54:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:58.444 /tmp/:spdk-test:key0 00:42:58.444 06:54:18 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:58.444 06:54:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:58.445 06:54:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:58.445 
06:54:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:58.445 06:54:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:58.445 06:54:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:42:58.445 06:54:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:58.445 06:54:18 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:58.445 06:54:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:42:58.445 06:54:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:42:58.445 06:54:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:58.445 06:54:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:58.445 /tmp/:spdk-test:key1 00:42:58.445 06:54:18 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3054716 00:42:58.445 06:54:18 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3054716 00:42:58.445 06:54:18 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:58.445 06:54:18 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3054716 ']' 00:42:58.445 06:54:18 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:58.445 06:54:18 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:58.445 06:54:18 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:58.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:58.445 06:54:18 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:58.445 06:54:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:58.445 [2024-11-20 06:54:18.349436] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
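
Both prep_key calls above funnel into format_key, whose inline python step builds the NVMe TLS PSK interchange string: the configured key bytes with a CRC32 appended, base64-encoded between the NVMeTLSkey-1 prefix and a two-digit digest tag (00 here, since digest=0). A sketch of that formatting for key0; the little-endian CRC byte order is an assumption inferred from the interchange format, not read from this log:

python3 - <<'EOF'
import base64, zlib

key = b"00112233445566778899aabbccddeeff"      # key0 from the trace above
crc = zlib.crc32(key).to_bytes(4, "little")    # assumption: LE byte order
psk = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:{:02x}:{}:".format(0, psk))
# should match the NVMeTLSkey-1:00:MDAxMTIy...JEiQ: string added to the
# keyring below, if the byte-order assumption holds
EOF

prep_key also chmods the resulting /tmp/:spdk-test:keyN file to 0600, since it holds the cleartext PSK.
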
00:42:58.445 [2024-11-20 06:54:18.349494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3054716 ] 00:42:58.706 [2024-11-20 06:54:18.432608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:58.706 [2024-11-20 06:54:18.463024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:59.279 06:54:19 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:59.279 06:54:19 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:42:59.279 06:54:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:59.279 06:54:19 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:59.279 06:54:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:59.279 [2024-11-20 06:54:19.126290] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:59.279 null0 00:42:59.279 [2024-11-20 06:54:19.158338] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:59.279 [2024-11-20 06:54:19.158697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:59.279 06:54:19 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:59.279 06:54:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:59.279 746666862 00:42:59.279 06:54:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:59.279 482525500 00:42:59.279 06:54:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3055034 00:42:59.279 06:54:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3055034 /var/tmp/bperf.sock 00:42:59.279 06:54:19 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:59.279 06:54:19 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3055034 ']' 00:42:59.279 06:54:19 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:59.279 06:54:19 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:59.279 06:54:19 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:59.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:59.279 06:54:19 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:59.279 06:54:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:59.539 [2024-11-20 06:54:19.236578] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
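
The kernel-keyring side of the test is four keyctl calls against the session keyring (@s): the two adds above, then the search/print checks and the unlinks later in the trace. A condensed replay; serials such as 746666862 are allocated per session and will differ:

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # resolves the name back to $sn
keyctl print "$sn"                      # dumps the payload for the [[ ... ]] comparison
keyctl unlink "$sn"                     # detach; logged as "1 links removed"

Running under scripts/keyctl-session-wrapper (the "Joined session keyring" line at the start of this test) gives the test a throwaway session keyring, so the adds and unlinks cannot leak into the invoking user's keyring.
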
00:42:59.539 [2024-11-20 06:54:19.236627] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3055034 ] 00:42:59.539 [2024-11-20 06:54:19.317969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:59.539 [2024-11-20 06:54:19.347907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:00.110 06:54:20 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:00.110 06:54:20 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:43:00.110 06:54:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:00.110 06:54:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:00.372 06:54:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:00.372 06:54:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:00.633 06:54:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:00.633 06:54:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:00.893 [2024-11-20 06:54:20.589690] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:00.893 nvme0n1 00:43:00.893 06:54:20 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:43:00.893 06:54:20 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:00.893 06:54:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:00.893 06:54:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:00.893 06:54:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:00.893 06:54:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:01.153 06:54:20 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:01.153 06:54:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:01.153 06:54:20 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:01.153 06:54:20 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:01.153 06:54:20 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:01.153 06:54:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:01.153 06:54:20 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:01.153 06:54:21 keyring_linux -- keyring/linux.sh@25 -- # sn=746666862 00:43:01.153 06:54:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:01.153 06:54:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:01.153 06:54:21 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 746666862 == \7\4\6\6\6\6\8\6\2 ]] 00:43:01.153 06:54:21 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 746666862 00:43:01.153 06:54:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:01.153 06:54:21 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:01.414 Running I/O for 1 seconds... 00:43:02.354 24374.00 IOPS, 95.21 MiB/s 00:43:02.354 Latency(us) 00:43:02.354 [2024-11-20T05:54:22.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:02.354 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:02.354 nvme0n1 : 1.01 24373.58 95.21 0.00 0.00 5236.72 4314.45 8738.13 00:43:02.354 [2024-11-20T05:54:22.274Z] =================================================================================================================== 00:43:02.354 [2024-11-20T05:54:22.274Z] Total : 24373.58 95.21 0.00 0.00 5236.72 4314.45 8738.13 00:43:02.354 { 00:43:02.354 "results": [ 00:43:02.354 { 00:43:02.354 "job": "nvme0n1", 00:43:02.354 "core_mask": "0x2", 00:43:02.354 "workload": "randread", 00:43:02.354 "status": "finished", 00:43:02.354 "queue_depth": 128, 00:43:02.354 "io_size": 4096, 00:43:02.354 "runtime": 1.005269, 00:43:02.354 "iops": 24373.5756300055, 00:43:02.354 "mibps": 95.20927980470898, 00:43:02.354 "io_failed": 0, 00:43:02.354 "io_timeout": 0, 00:43:02.354 "avg_latency_us": 5236.718112807118, 00:43:02.354 "min_latency_us": 4314.453333333333, 00:43:02.354 "max_latency_us": 8738.133333333333 00:43:02.354 } 00:43:02.354 ], 00:43:02.354 "core_count": 1 00:43:02.355 } 00:43:02.355 06:54:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:02.355 06:54:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:02.617 06:54:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:02.617 06:54:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:02.617 06:54:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:02.617 06:54:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:02.617 06:54:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:02.617 06:54:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:02.877 06:54:22 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:02.877 06:54:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:02.877 06:54:22 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:02.877 06:54:22 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:02.877 06:54:22 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:43:02.877 06:54:22 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:43:02.877 06:54:22 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:43:02.877 06:54:22 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:02.877 06:54:22 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:43:02.877 06:54:22 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:02.877 06:54:22 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:02.877 06:54:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:02.878 [2024-11-20 06:54:22.699832] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:02.878 [2024-11-20 06:54:22.700601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1509a60 (107): Transport endpoint is not connected 00:43:02.878 [2024-11-20 06:54:22.701597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1509a60 (9): Bad file descriptor 00:43:02.878 [2024-11-20 06:54:22.702599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:02.878 [2024-11-20 06:54:22.702606] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:02.878 [2024-11-20 06:54:22.702612] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:02.878 [2024-11-20 06:54:22.702618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:43:02.878 request: 00:43:02.878 { 00:43:02.878 "name": "nvme0", 00:43:02.878 "trtype": "tcp", 00:43:02.878 "traddr": "127.0.0.1", 00:43:02.878 "adrfam": "ipv4", 00:43:02.878 "trsvcid": "4420", 00:43:02.878 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:02.878 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:02.878 "prchk_reftag": false, 00:43:02.878 "prchk_guard": false, 00:43:02.878 "hdgst": false, 00:43:02.878 "ddgst": false, 00:43:02.878 "psk": ":spdk-test:key1", 00:43:02.878 "allow_unrecognized_csi": false, 00:43:02.878 "method": "bdev_nvme_attach_controller", 00:43:02.878 "req_id": 1 00:43:02.878 } 00:43:02.878 Got JSON-RPC error response 00:43:02.878 response: 00:43:02.878 { 00:43:02.878 "code": -5, 00:43:02.878 "message": "Input/output error" 00:43:02.878 } 00:43:02.878 06:54:22 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:43:02.878 06:54:22 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:02.878 06:54:22 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:02.878 06:54:22 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@33 -- # sn=746666862 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 746666862 00:43:02.878 1 links removed 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@33 -- # sn=482525500 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 482525500 00:43:02.878 1 links removed 00:43:02.878 06:54:22 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3055034 00:43:02.878 06:54:22 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3055034 ']' 00:43:02.878 06:54:22 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3055034 00:43:02.878 06:54:22 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:43:02.878 06:54:22 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:02.878 06:54:22 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3055034 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3055034' 00:43:03.139 killing process with pid 3055034 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@971 -- # kill 3055034 00:43:03.139 Received shutdown signal, test time was about 1.000000 seconds 00:43:03.139 00:43:03.139 
Latency(us) 00:43:03.139 [2024-11-20T05:54:23.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:03.139 [2024-11-20T05:54:23.059Z] =================================================================================================================== 00:43:03.139 [2024-11-20T05:54:23.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@976 -- # wait 3055034 00:43:03.139 06:54:22 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3054716 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3054716 ']' 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3054716 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3054716 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3054716' 00:43:03.139 killing process with pid 3054716 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@971 -- # kill 3054716 00:43:03.139 06:54:22 keyring_linux -- common/autotest_common.sh@976 -- # wait 3054716 00:43:03.400 00:43:03.400 real 0m5.177s 00:43:03.400 user 0m9.726s 00:43:03.400 sys 0m1.395s 00:43:03.400 06:54:23 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:03.400 06:54:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:03.400 ************************************ 00:43:03.400 END TEST keyring_linux 00:43:03.400 ************************************ 00:43:03.400 06:54:23 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:43:03.400 06:54:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:03.400 06:54:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:03.400 06:54:23 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:43:03.400 06:54:23 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:43:03.400 06:54:23 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:43:03.400 06:54:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:03.400 06:54:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:03.400 06:54:23 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:43:03.400 06:54:23 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:43:03.400 06:54:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:03.400 06:54:23 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:43:03.400 06:54:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:03.400 06:54:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:03.400 06:54:23 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:43:03.400 06:54:23 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:43:03.400 06:54:23 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:43:03.400 06:54:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:03.400 06:54:23 -- common/autotest_common.sh@10 -- # set +x 00:43:03.400 06:54:23 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:43:03.400 06:54:23 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:43:03.400 06:54:23 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:43:03.400 06:54:23 -- common/autotest_common.sh@10 -- # set +x 00:43:11.540 INFO: APP EXITING 
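
The expected-failure attach with :spdk-test:key1 earlier in this test only counts as a pass because it ran under the NOT wrapper, which inverts the exit status: the trace shows es=1, the (( es > 128 )) signal check, and the final (( !es == 0 )) that turns the failure into success. A minimal sketch of that inversion, assuming a plain non-signal failure is the only acceptable outcome (the real helper in common/autotest_common.sh is more careful about crash exits):

NOT() {
    local es=0
    "$@" || es=$?
    # 1..128 is an ordinary failure and passes the negative test;
    # 0 (unexpected success) or >128 (killed by a signal) fails it
    (( es > 0 && es <= 128 ))
}

NOT rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
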
00:43:11.540 INFO: killing all VMs 00:43:11.540 INFO: killing vhost app 00:43:11.540 INFO: EXIT DONE 00:43:14.088 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:43:14.088 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:43:14.348 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:43:14.348 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:43:14.348 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:43:14.348 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:43:14.348 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:43:14.348 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:43:14.348 0000:65:00.0 (144d a80a): Already using the nvme driver 00:43:14.348 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:43:14.348 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:43:14.348 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:43:14.348 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:43:14.608 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:43:14.608 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:43:14.608 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:43:14.608 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:43:18.809 Cleaning 00:43:18.809 Removing: /var/run/dpdk/spdk0/config 00:43:18.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:18.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:18.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:18.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:18.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:18.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:18.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:18.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:18.809 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:18.809 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:18.809 Removing: /var/run/dpdk/spdk1/config 00:43:18.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:18.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:18.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:18.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:18.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:18.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:18.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:18.809 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:18.809 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:18.809 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:18.809 Removing: /var/run/dpdk/spdk2/config 00:43:18.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:18.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:18.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:18.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:18.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:18.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:18.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:18.809 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:18.809 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:18.809 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:18.809 Removing: /var/run/dpdk/spdk3/config 00:43:18.809 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:18.809 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:18.809 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:18.809 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:18.809 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:18.809 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:18.809 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:18.809 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:18.809 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:18.809 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:18.809 Removing: /var/run/dpdk/spdk4/config 00:43:18.809 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:18.809 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:18.809 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:18.809 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:18.809 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:18.809 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:18.810 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:18.810 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:18.810 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:18.810 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:18.810 Removing: /dev/shm/bdev_svc_trace.1 00:43:18.810 Removing: /dev/shm/nvmf_trace.0 00:43:18.810 Removing: /dev/shm/spdk_tgt_trace.pid2471374 00:43:18.810 Removing: /var/run/dpdk/spdk0 00:43:18.810 Removing: /var/run/dpdk/spdk1 00:43:18.810 Removing: /var/run/dpdk/spdk2 00:43:18.810 Removing: /var/run/dpdk/spdk3 00:43:18.810 Removing: /var/run/dpdk/spdk4 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2469880 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2471374 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2472222 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2473263 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2473603 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2474692 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2474876 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2475163 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2476294 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2477032 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2477378 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2477711 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2478044 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2478370 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2478721 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2478871 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2479157 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2480529 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2484054 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2484353 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2484706 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2484880 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2485272 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2485589 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2485967 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2486252 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2486482 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2486682 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2486952 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2487054 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2487625 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2487961 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2488365 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2493561 00:43:18.810 Removing: 
/var/run/dpdk/spdk_pid2498838 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2510954 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2511718 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2517061 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2517418 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2522832 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2529959 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2533052 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2546374 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2557383 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2559536 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2560720 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2581812 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2586605 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2643113 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2649709 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2657260 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2665310 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2665376 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2666389 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2667397 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2668456 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2669038 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2669178 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2669380 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2669541 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2669544 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2670550 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2671552 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2672560 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2673233 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2673243 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2673575 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2675016 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2676370 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2686124 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2720641 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2726105 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2728095 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2730397 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2730557 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2730805 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2731155 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2731867 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2734214 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2735338 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2735837 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2738956 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2739654 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2740372 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2745465 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2752203 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2752204 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2752205 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2756929 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2767249 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2772067 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2779306 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2780802 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2782581 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2784174 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2790474 00:43:18.810 Removing: /var/run/dpdk/spdk_pid2795754 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2800715 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2810096 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2810216 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2815309 00:43:19.070 Removing: 
/var/run/dpdk/spdk_pid2815641 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2815778 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2816316 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2816321 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2821766 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2822566 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2828017 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2831137 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2837860 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2844550 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2855285 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2864001 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2864007 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2887175 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2887966 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2888652 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2889339 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2890400 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2891081 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2891763 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2892472 00:43:19.070 Removing: /var/run/dpdk/spdk_pid2898438 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2898712 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2905846 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2906223 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2912727 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2917781 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2929491 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2930170 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2935265 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2935612 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2940691 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2947563 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2951082 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2963301 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2974052 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2976053 00:43:19.071 Removing: /var/run/dpdk/spdk_pid2977065 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3000087 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3004922 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3008588 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3016226 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3016231 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3022419 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3024591 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3026797 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3028237 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3030433 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3031908 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3041817 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3042472 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3043126 00:43:19.071 Removing: /var/run/dpdk/spdk_pid3046014 00:43:19.331 Removing: /var/run/dpdk/spdk_pid3046426 00:43:19.331 Removing: /var/run/dpdk/spdk_pid3047045 00:43:19.331 Removing: /var/run/dpdk/spdk_pid3051959 00:43:19.331 Removing: /var/run/dpdk/spdk_pid3052021 00:43:19.331 Removing: /var/run/dpdk/spdk_pid3054273 00:43:19.331 Removing: /var/run/dpdk/spdk_pid3054716 00:43:19.331 Removing: /var/run/dpdk/spdk_pid3055034 00:43:19.331 Clean 00:43:19.331 06:54:39 -- common/autotest_common.sh@1451 -- # return 0 00:43:19.331 06:54:39 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:43:19.331 06:54:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:19.331 06:54:39 -- common/autotest_common.sh@10 -- # set +x 00:43:19.331 06:54:39 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:43:19.331 
06:54:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:19.331 06:54:39 -- common/autotest_common.sh@10 -- # set +x 00:43:19.331 06:54:39 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:19.331 06:54:39 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:19.331 06:54:39 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:19.331 06:54:39 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:43:19.331 06:54:39 -- spdk/autotest.sh@394 -- # hostname 00:43:19.331 06:54:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-13 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:19.592 geninfo: WARNING: invalid characters removed from testname! 00:43:46.164 06:55:04 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:48.076 06:55:07 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:50.630 06:55:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:52.014 06:55:11 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:53.448 06:55:13 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:55.387 06:55:14 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:56.770 06:55:16 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:56.770 06:55:16 -- spdk/autorun.sh@1 -- $ timing_finish 00:43:56.770 06:55:16 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:43:56.770 06:55:16 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:56.770 06:55:16 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:56.770 06:55:16 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:56.770 + [[ -n 2384446 ]] 00:43:56.770 + sudo kill 2384446 00:43:56.781 [Pipeline] } 00:43:56.799 [Pipeline] // stage 00:43:56.804 [Pipeline] } 00:43:56.821 [Pipeline] // timeout 00:43:56.826 [Pipeline] } 00:43:56.843 [Pipeline] // catchError 00:43:56.848 [Pipeline] } 00:43:56.865 [Pipeline] // wrap 00:43:56.871 [Pipeline] } 00:43:56.883 [Pipeline] // catchError 00:43:56.892 [Pipeline] stage 00:43:56.894 [Pipeline] { (Epilogue) 00:43:56.908 [Pipeline] catchError 00:43:56.910 [Pipeline] { 00:43:56.921 [Pipeline] echo 00:43:56.922 Cleanup processes 00:43:56.927 [Pipeline] sh 00:43:57.217 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:57.217 3064052 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:57.233 [Pipeline] sh 00:43:57.527 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:57.527 ++ grep -v 'sudo pgrep' 00:43:57.527 ++ awk '{print $1}' 00:43:57.527 + sudo kill -9 00:43:57.527 + true 00:43:57.540 [Pipeline] sh 00:43:57.830 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:10.073 [Pipeline] sh 00:44:10.360 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:10.360 Artifacts sizes are good 00:44:10.374 [Pipeline] archiveArtifacts 00:44:10.380 Archiving artifacts 00:44:10.516 [Pipeline] sh 00:44:10.803 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:10.818 [Pipeline] cleanWs 00:44:10.828 [WS-CLEANUP] Deleting project workspace... 00:44:10.828 [WS-CLEANUP] Deferred wipeout is used... 00:44:10.836 [WS-CLEANUP] done 00:44:10.838 [Pipeline] } 00:44:10.856 [Pipeline] // catchError 00:44:10.868 [Pipeline] sh 00:44:11.156 + logger -p user.info -t JENKINS-CI 00:44:11.167 [Pipeline] } 00:44:11.180 [Pipeline] // stage 00:44:11.185 [Pipeline] } 00:44:11.198 [Pipeline] // node 00:44:11.202 [Pipeline] End of Pipeline 00:44:11.239 Finished: SUCCESS
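
For reference, the coverage post-processing traced above (with the per-call --rc branch/function switches and the absolute workspace paths trimmed) reduces to one capture, one merge, and a series of filter passes over the lcov info files:

lcov -q -c --no-external -d spdk -t spdk-cyp-13 -o cov_test.info        # capture this run
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info             # merge with baseline
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info                  # drop vendored DPDK
lcov -q -r cov_total.info '/usr/*' --ignore-errors unused,unused -o cov_total.info
lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info

Each -r pass rewrites cov_total.info in place, so the filters compose in any order; the flamegraph call that follows in the trace is separate and renders timing.txt, not the coverage data.
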